diff --git a/doc/pub/week3/html/._week3-bs000.html b/doc/pub/week3/html/._week3-bs000.html
index 59267aee..ba242581 100644
--- a/doc/pub/week3/html/._week3-bs000.html
+++ b/doc/pub/week3/html/._week3-bs000.html
@@ -167,7 +291,7 @@

Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
-February 6-10
+February 2


@@ -206,7 +330,7 @@
-© 1999-2023, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
+© 1999-2024, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
diff --git a/doc/pub/week3/html/._week3-bs001.html b/doc/pub/week3/html/._week3-bs001.html
index 0361956f..b065dd83 100644
--- a/doc/pub/week3/html/._week3-bs001.html
+++ b/doc/pub/week3/html/._week3-bs001.html
@@ -149,12 +273,12 @@

     

     

     

-Overview of week 5
+Overview of week 5, January 29-February 2

    @@ -168,7 +292,6 @@

 Overview of week 5

 * Overview video on the Metropolis algorithm
 * Video of lecture tba
 * Handwritten notes tba
-* See also Lectures from FYS3150/4150 on the Metropolis Algorithm
diff --git a/doc/pub/week3/html/._week3-bs002.html b/doc/pub/week3/html/._week3-bs002.html
index a1bc757f..507f2c89 100644
--- a/doc/pub/week3/html/._week3-bs002.html
+++ b/doc/pub/week3/html/._week3-bs002.html
@@ -149,16 +273,23 @@

     

     

     

-Basics of the Metropolis Algorithm

+Importance Sampling: Overview of what needs to be coded

-The Metropolis algorithm was invented by Metropolis et al.
-and is often simply called the Metropolis algorithm.
-It is a method to sample a normalized probability
-distribution by a stochastic process. We define \( {\cal P}_i^{(n)} \) to
-be the probability for finding the system in the state \( i \) at step \( n \).
-The algorithm is then

+For a diffusion process characterized by a time-dependent probability density \( P(x,t) \) in one dimension, the Fokker-Planck
+equation reads (for one particle/walker)
+$$
+\frac{\partial P}{\partial t} = D\frac{\partial }{\partial x}\left(\frac{\partial }{\partial x} -F\right)P(x,t),
+$$
+where \( F \) is a drift term and \( D \) is the diffusion coefficient.

diff --git a/doc/pub/week3/html/._week3-bs003.html b/doc/pub/week3/html/._week3-bs003.html
index 040f8567..97975656 100644
--- a/doc/pub/week3/html/._week3-bs003.html
+++ b/doc/pub/week3/html/._week3-bs003.html
@@ -149,20 +273,30 @@

     

     

     

-The basic of the Metropolis Algorithm

+Importance sampling

+The new positions in coordinate space are given as the solutions of the Langevin equation using Euler's method, namely,
+we go from the Langevin equation
+$$
+\frac{\partial x(t)}{\partial t} = DF(x(t)) +\eta,
+$$

-We wish to derive the required properties of \( T \) and \( A \) such that
-\( {\cal P}_i^{(n\rightarrow \infty)} \rightarrow p_i \) so that, starting
-from any distribution, the method converges to the correct distribution.
-Note that the description here is for a discrete probability distribution.
-Replacing probabilities \( p_i \) with expressions like \( p(x_i)dx_i \) will
-take all of these over to the corresponding continuum expressions.

+with \( \eta \) a random variable,
+yielding a new position
+$$
+y = x+DF(x)\Delta t +\xi\sqrt{\Delta t},
+$$
+where \( \xi \) is a Gaussian random variable and \( \Delta t \) is a chosen time step.
+The quantity \( D \) is, in atomic units, equal to \( 1/2 \) and comes from the factor \( 1/2 \) in the kinetic energy operator. Note that \( \Delta t \) is to be viewed as a parameter. Values of \( \Delta t \in [0.001,0.01] \) yield in general rather stable values of the ground state energy.

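A minimal Python sketch of this Euler step (an editor's illustration, not part of the original slides; the function name langevin_step, the drift callable drift_force, and the parameter values are assumptions):

import numpy as np

rng = np.random.default_rng()
D = 0.5      # diffusion coefficient in atomic units
dt = 0.005   # time step chosen inside the stable window [0.001, 0.01]

def langevin_step(x, drift_force):
    # one Euler step of the Langevin equation: y = x + D*F(x)*dt + xi*sqrt(dt)
    xi = rng.standard_normal(size=x.shape)  # Gaussian random variable xi
    return x + D * drift_force(x) * dt + xi * np.sqrt(dt)

# example usage with an assumed drift F(x) = -2*x (a Gaussian trial function):
new_x = langevin_step(np.zeros(2), lambda x: -2.0 * x)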
diff --git a/doc/pub/week3/html/._week3-bs004.html b/doc/pub/week3/html/._week3-bs004.html
index 2597e197..9a11a696 100644
--- a/doc/pub/week3/html/._week3-bs004.html
+++ b/doc/pub/week3/html/._week3-bs004.html
@@ -149,24 +273,21 @@

     

     

     

-More on the Metropolis

+Importance sampling

+The process of isotropic diffusion characterized by a time-dependent probability density \( P(\mathbf{x},t) \) obeys (as an approximation) the so-called Fokker-Planck equation
+$$
+\frac{\partial P}{\partial t} = \sum_i D\frac{\partial }{\partial \mathbf{x}_i}\left(\frac{\partial }{\partial \mathbf{x}_i} -\mathbf{F}_i\right)P(\mathbf{x},t),
+$$

-The dynamical equation for \( {\cal P}_i^{(n)} \) can be written directly from
-the description above. The probability of being in the state \( i \) at step \( n \)
-is given by the probability of being in any state \( j \) at the previous step,
-and making an accepted transition to \( i \), added to the probability of
-being in the state \( i \), making a transition to any state \( j \) and
-rejecting the move:
-$$
-\begin{equation}
-\tag{1}
-{\cal P}^{(n)}_i = \sum_j \left [
-{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i}
-+{\cal P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right)
-\right ].
-\end{equation}
-$$

+where \( \mathbf{F}_i \) is the \( i \)th component of the drift term (drift velocity) caused by an external potential, and \( D \) is the diffusion coefficient. The convergence to a stationary probability density can be obtained by setting the left-hand side to zero. The resulting equation is satisfied if and only if all the terms of the sum equal zero,
+$$
+\frac{\partial^2 P}{\partial {\mathbf{x}_i^2}} = P\frac{\partial}{\partial {\mathbf{x}_i}}\mathbf{F}_i + \mathbf{F}_i\frac{\partial}{\partial {\mathbf{x}_i}}P.
+$$
diff --git a/doc/pub/week3/html/._week3-bs005.html b/doc/pub/week3/html/._week3-bs005.html
index daa435e2..81571ba2 100644
--- a/doc/pub/week3/html/._week3-bs005.html
+++ b/doc/pub/week3/html/._week3-bs005.html
@@ -149,22 +273,24 @@

     

     

     

-Metropolis algorithm, setting it up

-Since the probability of making some transition must be 1,
-\( \sum_j T_{i\rightarrow j} = 1 \), and Eq. (1) becomes
-$$
-\begin{equation}
-{\cal P}^{(n)}_i = {\cal P}^{(n-1)}_i +
-\sum_j \left [
-{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i}
--{\cal P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j}
-\right ].
-\tag{2}
-\end{equation}
-$$

+Importance sampling

+The drift vector should be of the form \( \mathbf{F} = g(\mathbf{x}) \frac{\partial P}{\partial \mathbf{x}} \). Then,
+$$
+\frac{\partial^2 P}{\partial {\mathbf{x}_i^2}} = P\frac{\partial g}{\partial P}\left( \frac{\partial P}{\partial {\mathbf{x}_i}} \right)^2 + P g \frac{\partial ^2 P}{\partial {\mathbf{x}_i^2}} + g \left( \frac{\partial P}{\partial {\mathbf{x}_i}} \right)^2.
+$$
+The condition of a stationary density means that the left-hand side equals zero. In other words, the terms containing first and second derivatives have to cancel each other. This is possible only if \( g = \frac{1}{P} \), which yields
+$$
+\mathbf{F} = 2\frac{1}{\Psi_T}\nabla\Psi_T,
+$$
+the so-called quantum force. This term is responsible for pushing the walker towards regions of configuration space where the trial wave function is large, increasing the efficiency of the simulation in contrast to the Metropolis algorithm, where the walker has the same probability of moving in every direction.

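The relation \( \mathbf{F} = 2\nabla\Psi_T/\Psi_T \) is easy to verify numerically. A sketch (editor's illustration; the one-dimensional Gaussian trial function and the parameter alpha are assumptions):

import numpy as np

alpha = 1.0

def psi_T(x):
    # assumed one-dimensional Gaussian trial wave function exp(-0.5*alpha*x**2)
    return np.exp(-0.5 * alpha * x**2)

def quantum_force(x):
    # analytical F = 2*(dpsi/dx)/psi = -2*alpha*x for this trial function
    return -2.0 * alpha * x

# central-difference check of F = 2*(dpsi/dx)/psi at a sample point
x, h = 0.7, 1e-5
fd = 2.0 * (psi_T(x + h) - psi_T(x - h)) / (2 * h) / psi_T(x)
print(fd, quantum_force(x))  # the two values should agree to roughly 1e-8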
diff --git a/doc/pub/week3/html/._week3-bs006.html b/doc/pub/week3/html/._week3-bs006.html
index ffc9160e..5eb2a99d 100644
--- a/doc/pub/week3/html/._week3-bs006.html
+++ b/doc/pub/week3/html/._week3-bs006.html
@@ -149,20 +273,26 @@

     

     

     

-Metropolis continues

-For large \( n \) we require that \( {\cal P}^{(n\rightarrow \infty)}_i = p_i \),
-the desired probability distribution. Taking this limit gives the
-balance requirement
-$$
-\begin{equation}
-\sum_j \left [p_jT_{j\rightarrow i} A_{j\rightarrow i}-p_iT_{i\rightarrow j}A_{i\rightarrow j}
-\right ] = 0.
-\tag{3}
-\end{equation}
-$$

+Importance sampling

+The Fokker-Planck equation yields a transition probability (the solution to the equation) given by the Green's function
+$$
+G(y,x,\Delta t) = \frac{1}{(4\pi D\Delta t)^{3N/2}} \exp{\left(-(y-x-D\Delta t F(x))^2/4D\Delta t\right)},
+$$
+which in turn means that our brute-force Metropolis algorithm
+$$
+A(y,x) = \mathrm{min}(1,q(y,x)),
+$$
+with \( q(y,x) = |\Psi_T(y)|^2/|\Psi_T(x)|^2 \), is now replaced by the Metropolis-Hastings algorithm (see Hastings' original article), with
+$$
+q(y,x) = \frac{G(x,y,\Delta t)|\Psi_T(y)|^2}{G(y,x,\Delta t)|\Psi_T(x)|^2}.
+$$

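A sketch of the resulting acceptance test (editor's illustration, not the course's reference implementation; psi_T, the drift callable F, and the values of D and dt are assumptions). The normalization prefactors of the two Green's functions cancel in the ratio, so only the exponents are needed:

import numpy as np

rng = np.random.default_rng()
D, dt = 0.5, 0.05

def log_greens_ratio(x, y, F):
    # ln[G(x,y,dt)/G(y,x,dt)]; the (4*pi*D*dt)**(3N/2) prefactors cancel
    return 0.5 * np.sum((F(x) + F(y)) * (0.5 * D * dt * (F(x) - F(y)) - y + x))

def metropolis_hastings_accept(x, y, psi_T, F):
    # q(y,x) = G(x,y,dt)*|Psi_T(y)|^2 / (G(y,x,dt)*|Psi_T(x)|^2)
    q = np.exp(log_greens_ratio(x, y, F)) * psi_T(y)**2 / psi_T(x)**2
    return rng.random() <= q  # equivalent to accepting with probability min(1, q)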
diff --git a/doc/pub/week3/html/._week3-bs007.html b/doc/pub/week3/html/._week3-bs007.html
index da19fa4a..9a701576 100644
--- a/doc/pub/week3/html/._week3-bs007.html
+++ b/doc/pub/week3/html/._week3-bs007.html
@@ -149,21 +273,264 @@

     

     

     

-Detailed Balance

+Code example for the interacting case with importance sampling

+We are now ready to implement importance sampling. This is done here for the two-electron case with the Coulomb interaction, as in the previous example. We have two variational parameters \( \alpha \) and \( \beta \). After the setup of files
    # Common imports
    +import os
    +
    +# Where to save the figures and data files
    +PROJECT_ROOT_DIR = "Results"
    +FIGURE_ID = "Results/FigureFiles"
    +DATA_ID = "Results/VMCQdotImportance"
    +
    +if not os.path.exists(PROJECT_ROOT_DIR):
    +    os.mkdir(PROJECT_ROOT_DIR)
    +
    +if not os.path.exists(FIGURE_ID):
    +    os.makedirs(FIGURE_ID)
    +
    +if not os.path.exists(DATA_ID):
    +    os.makedirs(DATA_ID)
    +
    +def image_path(fig_id):
    +    return os.path.join(FIGURE_ID, fig_id)
    +
    +def data_path(dat_id):
    +    return os.path.join(DATA_ID, dat_id)
     
-The balance requirement is very weak. Typically the much stronger detailed
-balance requirement is enforced; that is, rather than the sum being
-set to zero, we set each term separately to zero and use this
-to determine the acceptance probabilities. Rearranging, the result is
-$$
-\begin{equation}
-\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}} = \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}.
-\tag{4}
-\end{equation}
-$$

+def save_fig(fig_id):
+    plt.savefig(image_path(fig_id) + ".png", format='png')
+
+outfile = open(data_path("VMCQdotImportance.dat"),'w')

+we move on to the setup of the trial wave function, the analytical expression for the local energy, and the analytical expression for the quantum force.

    # 2-electron VMC code for 2dim quantum dot with importance sampling
+# Using Gaussian rng for new positions and Metropolis-Hastings
    +# No energy minimization
    +from math import exp, sqrt
    +from random import random, seed, normalvariate
    +import numpy as np
    +import matplotlib.pyplot as plt
    +from mpl_toolkits.mplot3d import Axes3D
    +from matplotlib import cm
    +from matplotlib.ticker import LinearLocator, FormatStrFormatter
    +import sys
    +
    +
    +# Trial wave function for the 2-electron quantum dot in two dims
    +def WaveFunction(r,alpha,beta):
    +    r1 = r[0,0]**2 + r[0,1]**2
    +    r2 = r[1,0]**2 + r[1,1]**2
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = r12/(1+beta*r12)
    +    return exp(-0.5*alpha*(r1+r2)+deno)
    +
    +# Local energy  for the 2-electron quantum dot in two dims, using analytical local energy
    +def LocalEnergy(r,alpha,beta):
    +    
    +    r1 = (r[0,0]**2 + r[0,1]**2)
    +    r2 = (r[1,0]**2 + r[1,1]**2)
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = 1.0/(1+beta*r12)
    +    deno2 = deno*deno
    +    return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)
    +
    +# Setting up the quantum force for the two-electron quantum dot, recall that it is a vector
    +def QuantumForce(r,alpha,beta):
    +
    +    qforce = np.zeros((NumberParticles,Dimension), np.double)
    +    r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)
    +    deno = 1.0/(1+beta*r12)
+    # F = 2*grad(ln Psi_T): harmonic part -2*alpha*r_i plus Jastrow part 2*(r_i-r_j)*deno**2/r12
+    qforce[0,:] = -2*alpha*r[0,:] + 2*(r[0,:]-r[1,:])*deno*deno/r12
+    qforce[1,:] = -2*alpha*r[1,:] + 2*(r[1,:]-r[0,:])*deno*deno/r12
    +    return qforce

+The Monte Carlo sampling now includes the Metropolis-Hastings algorithm, with the additional complication of having to evaluate the quantum force and the Green's function, which is the solution of the Fokker-Planck equation.

    # The Monte Carlo sampling with the Metropolis algo
    +def MonteCarloSampling():
    +
    +    NumberMCcycles= 100000
    +    # Parameters in the Fokker-Planck simulation of the quantum force
    +    D = 0.5
    +    TimeStep = 0.05
    +    # positions
    +    PositionOld = np.zeros((NumberParticles,Dimension), np.double)
    +    PositionNew = np.zeros((NumberParticles,Dimension), np.double)
    +    # Quantum force
    +    QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)
    +    QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)
    +
    +    # seed for rng generator 
    +    seed()
    +    # start variational parameter  loops, two parameters here
    +    alpha = 0.9
    +    for ia in range(MaxVariations):
    +        alpha += .025
    +        AlphaValues[ia] = alpha
    +        beta = 0.2 
    +        for jb in range(MaxVariations):
    +            beta += .01
    +            BetaValues[jb] = beta
    +            energy = energy2 = 0.0
    +            DeltaE = 0.0
    +            #Initial position
    +            for i in range(NumberParticles):
    +                for j in range(Dimension):
    +                    PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)
    +            wfold = WaveFunction(PositionOld,alpha,beta)
    +            QuantumForceOld = QuantumForce(PositionOld,alpha, beta)
    +
    +            #Loop over MC MCcycles
    +            for MCcycle in range(NumberMCcycles):
    +                #Trial position moving one particle at the time
    +                for i in range(NumberParticles):
    +                    for j in range(Dimension):
    +                        PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\
    +                                           QuantumForceOld[i,j]*TimeStep*D
    +                    wfnew = WaveFunction(PositionNew,alpha,beta)
    +                    QuantumForceNew = QuantumForce(PositionNew,alpha, beta)
    +                    GreensFunction = 0.0
    +                    for j in range(Dimension):
    +                        GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\
    +	                              (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\
    +                                      PositionNew[i,j]+PositionOld[i,j])
    +      
    +                    GreensFunction = exp(GreensFunction)
    +                    ProbabilityRatio = GreensFunction*wfnew**2/wfold**2
    +                    #Metropolis-Hastings test to see whether we accept the move
    +                    if random() <= ProbabilityRatio:
    +                       for j in range(Dimension):
    +                           PositionOld[i,j] = PositionNew[i,j]
    +                           QuantumForceOld[i,j] = QuantumForceNew[i,j]
    +                       wfold = wfnew
    +                DeltaE = LocalEnergy(PositionOld,alpha,beta)
    +                energy += DeltaE
    +                energy2 += DeltaE**2
    +            # We calculate mean, variance and error (no blocking applied)
    +            energy /= NumberMCcycles
    +            energy2 /= NumberMCcycles
    +            variance = energy2 - energy**2
    +            error = sqrt(variance/NumberMCcycles)
    +            Energies[ia,jb] = energy    
    +            outfile.write('%f %f %f %f %f\n' %(alpha,beta,energy,variance,error))
    +    return Energies, AlphaValues, BetaValues

    The main part here contains the setup of the variational parameters, the energies and the variance.

    #Here starts the main program with variable declarations
    +NumberParticles = 2
    +Dimension = 2
    +MaxVariations = 10
    +Energies = np.zeros((MaxVariations,MaxVariations))
    +AlphaValues = np.zeros(MaxVariations)
    +BetaValues = np.zeros(MaxVariations)
    +(Energies, AlphaValues, BetaValues) = MonteCarloSampling()
    +outfile.close()
    +# Prepare for plots
    +fig = plt.figure()
+ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') no longer works in Matplotlib >= 3.6
    +# Plot the surface.
    +X, Y = np.meshgrid(AlphaValues, BetaValues)
    +surf = ax.plot_surface(X, Y, Energies,cmap=cm.coolwarm,linewidth=0, antialiased=False)
    +# Customize the z axis.
+zmin = Energies.min()
+zmax = Energies.max()
    +ax.set_zlim(zmin, zmax)
    +ax.set_xlabel(r'$\alpha$')
    +ax.set_ylabel(r'$\beta$')
    +ax.set_zlabel(r'$\langle E \rangle$')
    +ax.zaxis.set_major_locator(LinearLocator(10))
    +ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
    +# Add a color bar which maps values to colors.
    +fig.colorbar(surf, shrink=0.5, aspect=5)
    +save_fig("QdotImportance")
    +plt.show()

diff --git a/doc/pub/week3/html/._week3-bs008.html b/doc/pub/week3/html/._week3-bs008.html
index 75983c67..894753bd 100644
--- a/doc/pub/week3/html/._week3-bs008.html
+++ b/doc/pub/week3/html/._week3-bs008.html
@@ -149,30 +273,31 @@

     

     

     

-More on Detailed Balance

-The Metropolis choice is to maximize the \( A \) values, that is
-$$
-\begin{equation}
-A_{j \rightarrow i} = \min \left ( 1, \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ).
-\tag{5}
-\end{equation}
-$$
-Other choices are possible, but they all correspond to multiplying
-\( A_{i\rightarrow j} \) and \( A_{j\rightarrow i} \) by the same constant
-smaller than unity. The penalty function method uses just such
-a factor to compensate for \( p_i \) that are evaluated stochastically
-and are therefore noisy.

-Having chosen the acceptance probabilities, we have guaranteed that
-if the \( {\cal P}_i^{(n)} \) has equilibrated, that is if it is equal to \( p_i \),
-it will remain equilibrated. Next we need to find the circumstances for
-convergence to equilibrium.

+Importance sampling, program elements

+The general derivative formula of the Jastrow factor is (the subscript \( C \) stands for correlation)
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
++\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k}.
+$$
+However, with the Jastrow factor written in a form which can be reused later,
+$$
+\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}f(r_{ij})\right\}},
+$$
+the gradient needed for the quantum force and local energy is easy to compute.
+The function \( f(r_{ij}) \) will depend on the system under study. In the equations below we will keep this general form.

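As an editor's sketch (not part of the original notes), the gradient above can be coded directly; the Pade-Jastrow exponent \( f(r)=r/(1+\beta r) \) and the parameter value are assumptions for the example:

import numpy as np

beta = 0.3  # assumed Jastrow parameter

def f_prime(r):
    # derivative of the assumed Pade-Jastrow exponent f(r) = r/(1 + beta*r)
    return 1.0 / (1.0 + beta * r)**2

def grad_lnPsiC(positions, k):
    # (1/Psi_C) * dPsi_C/dx_k: only the N-1 pairs involving particle k contribute
    grad = np.zeros(positions.shape[1])
    for i in range(positions.shape[0]):
        if i == k:
            continue
        rvec = positions[k] - positions[i]
        r = np.linalg.norm(rvec)
        grad += rvec / r * f_prime(r)
    return grad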
diff --git a/doc/pub/week3/html/._week3-bs009.html b/doc/pub/week3/html/._week3-bs009.html
index 8fd0135c..ee9ce811 100644
--- a/doc/pub/week3/html/._week3-bs009.html
+++ b/doc/pub/week3/html/._week3-bs009.html
@@ -149,32 +273,27 @@

     

     

     

-Dynamical Equation

-The dynamical equation can be written as
-$$
-\begin{equation}
-{\cal P}^{(n)}_i = \sum_j M_{ij}{\cal P}^{(n-1)}_j
-\tag{6}
-\end{equation}
-$$
-with the matrix \( M \) given by
-$$
-\begin{equation}
-M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k} \right ] + T_{j\rightarrow i} A_{j\rightarrow i}.
-\tag{7}
-\end{equation}
-$$
-Summing over \( i \) shows that \( \sum_i M_{ij} = 1 \), and since
-\( \sum_k T_{i\rightarrow k} = 1 \) and \( A_{i \rightarrow k} \leq 1 \), the
-elements of the matrix satisfy \( M_{ij} \geq 0 \). The matrix \( M \) is therefore
-a stochastic matrix.

+Importance sampling, program elements

+In the Metropolis-Hastings algorithm, the acceptance ratio determines the probability for a particle to be accepted at a new position. The ratio of the trial wave functions evaluated at the new and current positions is given by (\( OB \) stands for the onebody part)
+$$
+R \equiv \frac{\Psi_{T}^{new}}{\Psi_{T}^{old}} =
+\frac{\Psi_{OB}^{new}}{\Psi_{OB}^{old}}\frac{\Psi_{C}^{new}}{\Psi_{C}^{old}}.
+$$
+Here \( \Psi_{OB} \) is our onebody part (Slater determinant or product of boson single-particle states) while \( \Psi_{C} \) is our correlation function, or Jastrow factor.
+We need to optimize the \( \nabla \Psi_T / \Psi_T \) ratio and the second derivative as well, that is
+the \( \mathbf{\nabla}^2 \Psi_T/\Psi_T \) ratio. The first is needed when we compute the so-called quantum force in importance sampling.
+The second is needed when we compute the kinetic energy term of the local energy.
+$$
+\frac{\mathbf{\nabla} \Psi}{\Psi} = \frac{\mathbf{\nabla} (\Psi_{OB} \, \Psi_{C})}{\Psi_{OB} \, \Psi_{C}} = \frac{ \Psi_C \mathbf{\nabla} \Psi_{OB} + \Psi_{OB} \mathbf{\nabla} \Psi_{C}}{\Psi_{OB} \Psi_{C}} = \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla} \Psi_C}{ \Psi_C}.
+$$

diff --git a/doc/pub/week3/html/._week3-bs010.html b/doc/pub/week3/html/._week3-bs010.html
index 16e5f12b..1d71147f 100644
--- a/doc/pub/week3/html/._week3-bs010.html
+++ b/doc/pub/week3/html/._week3-bs010.html
@@ -149,37 +273,21 @@

     

     

     

-Interpreting the Metropolis Algorithm

-The Metropolis method is simply the power method for computing the
-right eigenvector of \( M \) with the largest magnitude eigenvalue.
-By construction, the correct probability distribution is a right eigenvector
-with eigenvalue 1. Therefore, for the Metropolis method to converge
-to this result, we must show that \( M \) has only one eigenvalue with this
-magnitude, and all other eigenvalues are smaller.

-Even a defective matrix has at least one left and right eigenvector for
-each eigenvalue. An example of a defective matrix is
-$$
-\begin{bmatrix}
-0 & 1\\
-0 & 0 \\
-\end{bmatrix},
-$$
-with two zero eigenvalues, only one right eigenvector
-$$
-\begin{bmatrix}
-1 \\
-0\\
-\end{bmatrix},
-$$
-and only one left eigenvector \( (0\ 1) \).

+Importance sampling

+The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is
+$$
+\langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\mathbf{\nabla}_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle},
+$$
+with the corresponding local kinetic energy
+$$
+\hat{K}_i = -\frac{1}{2}\frac{\mathbf{\nabla}_{i}^{2} \Psi}{\Psi}.
+$$

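As an editor's illustration (not from the original slides), the local kinetic energy can be checked numerically with central differences; the Gaussian trial function and the step h are assumptions:

import numpy as np

def local_kinetic(psi, r, h=1e-4):
    # -(1/2)*(nabla^2 psi)/psi at configuration r, via central differences
    lap = 0.0
    for d in range(r.size):
        step = np.zeros_like(r)
        step[d] = h
        lap += (psi(r + step) - 2.0 * psi(r) + psi(r - step)) / h**2
    return -0.5 * lap / psi(r)

# for psi = exp(-0.5*alpha*r^2) in two dimensions, (nabla^2 psi)/psi = alpha^2*r^2 - 2*alpha
alpha = 1.0
psi = lambda r: np.exp(-0.5 * alpha * np.sum(r**2))
print(local_kinetic(psi, np.array([0.3, -0.4])))  # should be close to 0.875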
diff --git a/doc/pub/week3/html/._week3-bs011.html b/doc/pub/week3/html/._week3-bs011.html
index dde1420d..923163f7 100644
--- a/doc/pub/week3/html/._week3-bs011.html
+++ b/doc/pub/week3/html/._week3-bs011.html
@@ -149,21 +273,18 @@

     

     

     

-Gershgorin bounds and Metropolis

-The Gershgorin bounds for the eigenvalues can be derived by multiplying on
-the left with the eigenvector with the maximum and minimum eigenvalues,
-$$
-\begin{align}
-\sum_i \psi^{\rm max}_i M_{ij} =& \lambda_{\rm max} \psi^{\rm max}_j,
-\nonumber\\
-\sum_i \psi^{\rm min}_i M_{ij} =& \lambda_{\rm min} \psi^{\rm min}_j.
-\tag{8}
-\end{align}
-$$

+Importance sampling

+The second derivative which enters the definition of the local energy is
+$$
+\frac{\mathbf{\nabla}^2 \Psi}{\Psi}=\frac{\mathbf{\nabla}^2 \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla}^2 \Psi_C}{ \Psi_C} + 2 \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}}\cdot\frac{\mathbf{\nabla} \Psi_C}{ \Psi_C}.
+$$
+We discuss here how to calculate these quantities in an optimal way.
diff --git a/doc/pub/week3/html/._week3-bs012.html b/doc/pub/week3/html/._week3-bs012.html
index 373d9e3a..3a486bff 100644
--- a/doc/pub/week3/html/._week3-bs012.html
+++ b/doc/pub/week3/html/._week3-bs012.html
@@ -149,30 +273,27 @@

     

     

     

-Normalizing the Eigenvectors

-Next we choose the normalization of these eigenvectors so that the
-largest element (or one of the equally largest elements)
-has value 1. Let's call this element \( k \); we
-can therefore bound the magnitude of the other elements to be less
-than or equal to 1.
-This leads to the inequalities, using the property that \( M_{ij}\geq 0 \),
-$$
-\begin{eqnarray}
-\sum_i M_{ik} \leq \lambda_{\rm max},
-\nonumber\\
-M_{kk}-\sum_{i \neq k} M_{ik} \geq \lambda_{\rm min},
-\end{eqnarray}
-$$
-where the equality for the maximum
-will occur only if the eigenvector takes the value 1 for all values of
-\( i \) where \( M_{ik} \neq 0 \), and the equality for the minimum will
-occur only if the eigenvector takes the value -1 for all values of \( i\neq k \)
-where \( M_{ik} \neq 0 \).

+Importance sampling

+We have defined the correlated function as
+$$
+\Psi_C=\prod_{i < j}g(r_{ij})=\prod_{i < j}^Ng(r_{ij})= \prod_{i=1}^N\prod_{j=i+1}^Ng(r_{ij}),
+$$
+with
+\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \) in three dimensions, or
+\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \) if we work with two-dimensional systems.

+In our particular case we have
+$$
+\Psi_C=\prod_{i < j}g(r_{ij})=\exp{\left\{\sum_{i < j}f(r_{ij})\right\}}.
+$$

diff --git a/doc/pub/week3/html/._week3-bs013.html b/doc/pub/week3/html/._week3-bs013.html
index 84225a55..871395ce 100644
--- a/doc/pub/week3/html/._week3-bs013.html
+++ b/doc/pub/week3/html/._week3-bs013.html
@@ -149,35 +273,26 @@

     

     

     

-More Metropolis analysis

+Importance sampling

+The total number of different relative distances \( r_{ij} \) is \( N(N-1)/2 \). In a matrix storage format, the relative distances form a strictly upper triangular matrix
+$$
+\mathbf{r} \equiv \begin{pmatrix}
+0 & r_{1,2} & r_{1,3} & \cdots & r_{1,N} \\
+\vdots & 0 & r_{2,3} & \cdots & r_{2,N} \\
+\vdots & \vdots & 0 & \ddots & \vdots \\
+\vdots & \vdots & \vdots & \ddots & r_{N-1,N} \\
+0 & 0 & 0 & \cdots & 0
+\end{pmatrix}.
+$$
+This applies to \( \mathbf{g} = \mathbf{g}(r_{ij}) \) as well.

-That the maximum eigenvalue is 1 follows immediately from the property
-that \( \sum_i M_{ik} = 1 \). Similarly, the minimum eigenvalue can be -1,
-but only if \( M_{kk} = 0 \) and the magnitude of all the other elements
-\( \psi_i^{\rm min} \) of the eigenvector that multiply nonzero elements \( M_{ik} \) are 1.

-Let's first see what the properties of \( M \) must be to eliminate any -1 eigenvalues.
-To have a -1 eigenvalue, the left eigenvector must contain only \( \pm 1 \)
-and \( 0 \) values. Taking in turn each \( \pm 1 \) value as the maximum, so that
-it corresponds to the index \( k \), the nonzero \( M_{ik} \) values must
-correspond to \( i \) index values of the eigenvector which have opposite
-sign elements. That is, the \( M \) matrix must break up into sets of
-states that always make transitions from set A to set B ... back to set A.
-In particular, there can be no rejections of these moves in the cycle
-since the -1 eigenvalue requires \( M_{kk}=0 \). To guarantee no eigenvalues
-with eigenvalue -1, we simply have to make sure that there are no
-cycles among states. Notice that this is generally trivial since such
-cycles cannot have any rejections at any stage. An example of such
-a cycle is sampling a noninteracting Ising spin. If the transition is
-taken to flip the spin, and the energy difference is zero, the Boltzmann
-factor will not change and the move will always be accepted. The system
-will simply flip from up to down to up to down ad infinitum. Including
-a rejection probability or using a heat bath algorithm
-immediately fixes the problem.

+In our algorithm we will move one particle at a time, say the \( k \)th particle. This sampling will be seen to be particularly efficient when we are going to compute a Slater determinant.

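An editor's sketch (assuming a NumPy positions array of shape (N, dim)) of such a single-particle update, refreshing only the row and column of the moved particle \( k \):

import numpy as np

def update_distances(r_mat, positions, k):
    # refresh only row/column k of the strictly upper-triangular distance matrix
    # after particle k has moved; the remaining entries are unchanged
    N = positions.shape[0]
    for i in range(k):
        r_mat[i, k] = np.linalg.norm(positions[k] - positions[i])
    for j in range(k + 1, N):
        r_mat[k, j] = np.linalg.norm(positions[j] - positions[k])
    return r_mat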
diff --git a/doc/pub/week3/html/._week3-bs014.html b/doc/pub/week3/html/._week3-bs014.html
index 0d22be15..4a53b98f 100644
--- a/doc/pub/week3/html/._week3-bs014.html
+++ b/doc/pub/week3/html/._week3-bs014.html
@@ -149,24 +273,33 @@

     

     

     

-Final Considerations I

-Next we need to make sure that there is only one left eigenvector
-with eigenvalue 1. To get an eigenvalue 1, the left eigenvector must be
-constructed from only ones and zeroes. It is straightforward to
-see that a vector made up of ones and zeroes can only be an eigenvector
-with eigenvalue 1 if the matrix element \( M_{ij} = 0 \) for all cases where \( \psi_i \neq \psi_j \).
-That is, we can choose an index \( i \) and take \( \psi_i = 1 \).
-We require all elements \( \psi_j \) where \( M_{ij} \neq 0 \) to also have
-the value \( 1 \). Continuing, we then require all elements \( \psi_\ell \) where \( M_{j\ell} \neq 0 \)
-to have value \( 1 \). Only if the matrix \( M \) can be put into block diagonal
-form can there be more than one choice for the left eigenvector with
-eigenvalue 1. We therefore require that the transition matrix not
-be in block diagonal form. This simply means that we must choose
-the transition probability so that we can get from any allowed state
-to any other in a series of transitions.

+Importance sampling

+We have that the ratio between Jastrow factors \( R_C \) is given by
+$$
+R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} =
+\prod_{i=1}^{k-1}\frac{g_{ik}^\mathrm{new}}{g_{ik}^\mathrm{cur}}
+\prod_{i=k+1}^{N}\frac{ g_{ki}^\mathrm{new}} {g_{ki}^\mathrm{cur}}.
+$$
+For the Pade-Jastrow form
+$$
+R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} =
+\frac{\exp{U_{new}}}{\exp{U_{cur}}} = \exp{\Delta U},
+$$
+where
+$$
+\Delta U =
+\sum_{i=1}^{k-1}\big(f_{ik}^\mathrm{new}-f_{ik}^\mathrm{cur}\big)
++\sum_{i=k+1}^{N}\big(f_{ki}^\mathrm{new}-f_{ki}^\mathrm{cur}\big).
+$$

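An editor's sketch of the corresponding \( \mathcal{O}(N) \) update of \( R_C = \exp{\Delta U} \), assuming the Pade-Jastrow exponent \( f(r)=r/(1+\beta r) \):

import numpy as np

beta = 0.3  # assumed Jastrow parameter

def f(r):
    # assumed Pade-Jastrow exponent f(r) = r/(1 + beta*r)
    return r / (1.0 + beta * r)

def jastrow_ratio(pos_cur, pos_new, k):
    # R_C = exp(Delta U); only the N-1 pairs involving the moved particle k change
    dU = 0.0
    for i in range(pos_cur.shape[0]):
        if i == k:
            continue
        dU += f(np.linalg.norm(pos_new[k] - pos_new[i])) - f(np.linalg.norm(pos_cur[k] - pos_cur[i]))
    return np.exp(dU)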
diff --git a/doc/pub/week3/html/._week3-bs015.html b/doc/pub/week3/html/._week3-bs015.html
index d3b82f06..9b095f02 100644
--- a/doc/pub/week3/html/._week3-bs015.html
+++ b/doc/pub/week3/html/._week3-bs015.html
@@ -149,22 +273,24 @@

     

     

     

-Final Considerations II

-Finally, we note that for a defective matrix, with more eigenvalues
-than independent eigenvectors for eigenvalue 1, the left and right
-eigenvectors of eigenvalue 1 would be orthogonal. Here the left eigenvector
-is all 1 except for states that can never be reached, and the right eigenvector
-is \( p_i > 0 \), except for states that give zero probability. We already
-require that we can reach all states that contribute to \( p_i \). Therefore the left and right
-eigenvectors with eigenvalue 1 do not correspond to a defective sector
-of the matrix and they are unique. The Metropolis algorithm therefore
-converges exponentially to the desired distribution.

+Importance sampling

+One needs to develop a special algorithm
+that runs only through the elements of the upper triangular
+matrix \( \mathbf{g} \) and has \( k \) as an index.

+The expression to be derived in the following is of interest when computing the quantum force and the kinetic energy. It has the form
+$$
+\frac{\mathbf{\nabla}_i\Psi_C}{\Psi_C} = \frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_i},
+$$
+for all dimensions and with \( i \) running over all particles.

diff --git a/doc/pub/week3/html/._week3-bs016.html b/doc/pub/week3/html/._week3-bs016.html
index 540e7de2..37ff6430 100644
--- a/doc/pub/week3/html/._week3-bs016.html
+++ b/doc/pub/week3/html/._week3-bs016.html
@@ -149,13 +273,31 @@

     

     

     

-Final Considerations III

-The requirements for the transition \( T_{i \rightarrow j} \) are

+Importance sampling

+For the first derivative only \( N-1 \) terms survive the ratio because the \( g \)-terms that are not differentiated cancel with their corresponding ones in the denominator. Then,
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
++\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_k}.
+$$
+An equivalent equation is obtained for the exponential form after replacing \( g_{ij} \) by \( \exp(f_{ij}) \), yielding
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
++\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k},
+$$
+with both expressions scaling as \( \mathcal{O}(N) \).

diff --git a/doc/pub/week3/html/._week3-bs017.html b/doc/pub/week3/html/._week3-bs017.html
index 7e28ebb7..f8efa40e 100644
--- a/doc/pub/week3/html/._week3-bs017.html
+++ b/doc/pub/week3/html/._week3-bs017.html
@@ -149,20 +273,29 @@

     

     

     

-Importance Sampling: Overview of what needs to be coded

-For a diffusion process characterized by a time-dependent probability density \( P(x,t) \) in one dimension the Fokker-Planck
-equation reads (for one particle/walker)
-$$
-\frac{\partial P}{\partial t} = D\frac{\partial }{\partial x}\left(\frac{\partial }{\partial x} -F\right)P(x,t),
-$$
-where \( F \) is a drift term and \( D \) is the diffusion coefficient.

+Importance sampling

+Using the identity
+$$
+\frac{\partial}{\partial x_i}g_{ij} = -\frac{\partial}{\partial x_j}g_{ij},
+$$
+we get expressions where all the derivatives acting on the particle are represented by the second index of \( g \):
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
+-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_i},
+$$
+and for the exponential case:
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
+-\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}.
+$$
diff --git a/doc/pub/week3/html/._week3-bs018.html b/doc/pub/week3/html/._week3-bs018.html
index 447162fa..d218b689 100644
--- a/doc/pub/week3/html/._week3-bs018.html
+++ b/doc/pub/week3/html/._week3-bs018.html
@@ -153,23 +277,17 @@

 Importance sampling

-The new positions in coordinate space are given as the solutions of the Langevin equation using Euler's method, namely,
-we go from the Langevin equation
-$$
-\frac{\partial x(t)}{\partial t} = DF(x(t)) +\eta,
-$$
-with \( \eta \) a random variable,
-yielding a new position
-$$
-y = x+DF(x)\Delta t +\xi\sqrt{\Delta t},
-$$
-where \( \xi \) is a Gaussian random variable and \( \Delta t \) is a chosen time step.
-The quantity \( D \) is, in atomic units, equal to \( 1/2 \) and comes from the factor \( 1/2 \) in the kinetic energy operator. Note that \( \Delta t \) is to be viewed as a parameter. Values of \( \Delta t \in [0.001,0.01] \) yield in general rather stable values of the ground state energy.

+For correlation forms depending only on the scalar distances \( r_{ij} \) we can use the chain rule. Noting that
+$$
+\frac{\partial g_{ij}}{\partial x_j} = \frac{\partial g_{ij}}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_j} = \frac{x_j - x_i}{r_{ij}} \frac{\partial g_{ij}}{\partial r_{ij}},
+$$
+we arrive at
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{1}{g_{ik}} \frac{\mathbf{r}_{ik}}{r_{ik}} \frac{\partial g_{ik}}{\partial r_{ik}}
+-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\mathbf{r}_{ki}}{r_{ki}}\frac{\partial g_{ki}}{\partial r_{ki}}.
+$$

diff --git a/doc/pub/week3/html/._week3-bs019.html b/doc/pub/week3/html/._week3-bs019.html
index 5242ec11..58b12125 100644
--- a/doc/pub/week3/html/._week3-bs019.html
+++ b/doc/pub/week3/html/._week3-bs019.html
@@ -153,15 +277,24 @@

 Importance sampling

-The process of isotropic diffusion characterized by a time-dependent probability density \( P(\mathbf{x},t) \) obeys (as an approximation) the so-called Fokker-Planck equation
-$$
-\frac{\partial P}{\partial t} = \sum_i D\frac{\partial }{\partial \mathbf{x}_i}\left(\frac{\partial }{\partial \mathbf{x}_i} -\mathbf{F}_i\right)P(\mathbf{x},t),
-$$
-where \( \mathbf{F}_i \) is the \( i \)th component of the drift term (drift velocity) caused by an external potential, and \( D \) is the diffusion coefficient. The convergence to a stationary probability density can be obtained by setting the left-hand side to zero. The resulting equation is satisfied if and only if all the terms of the sum equal zero,
-$$
-\frac{\partial^2 P}{\partial {\mathbf{x}_i^2}} = P\frac{\partial}{\partial {\mathbf{x}_i}}\mathbf{F}_i + \mathbf{F}_i\frac{\partial}{\partial {\mathbf{x}_i}}P.
-$$

+Note that for the Pade-Jastrow form we can set \( g_{ij} \equiv g(r_{ij}) = e^{f(r_{ij})} = e^{f_{ij}} \) and
+$$
+\frac{\partial g_{ij}}{\partial r_{ij}} = g_{ij} \frac{\partial f_{ij}}{\partial r_{ij}}.
+$$
+Therefore,
+$$
+\frac{1}{\Psi_{C}}\frac{\partial \Psi_{C}}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\mathbf{r}_{ik}}{r_{ik}}\frac{\partial f_{ik}}{\partial r_{ik}}
+-\sum_{i=k+1}^{N}\frac{\mathbf{r}_{ki}}{r_{ki}}\frac{\partial f_{ki}}{\partial r_{ki}},
+$$
+where
+$$
+\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i = (x_j - x_i)\mathbf{e}_1 + (y_j - y_i)\mathbf{e}_2 + (z_j - z_i)\mathbf{e}_3
+$$
+is the relative distance vector.

diff --git a/doc/pub/week3/html/._week3-bs020.html b/doc/pub/week3/html/._week3-bs020.html
index d664fa6b..e4c8d802 100644
--- a/doc/pub/week3/html/._week3-bs020.html
+++ b/doc/pub/week3/html/._week3-bs020.html
@@ -153,17 +277,17 @@

 Importance sampling

-The drift vector should be of the form \( \mathbf{F} = g(\mathbf{x}) \frac{\partial P}{\partial \mathbf{x}} \). Then,
-$$
-\frac{\partial^2 P}{\partial {\mathbf{x}_i^2}} = P\frac{\partial g}{\partial P}\left( \frac{\partial P}{\partial {\mathbf{x}_i}} \right)^2 + P g \frac{\partial ^2 P}{\partial {\mathbf{x}_i^2}} + g \left( \frac{\partial P}{\partial {\mathbf{x}_i}} \right)^2.
-$$
-The condition of a stationary density means that the left-hand side equals zero. In other words, the terms containing first and second derivatives have to cancel each other. This is possible only if \( g = \frac{1}{P} \), which yields
-$$
-\mathbf{F} = 2\frac{1}{\Psi_T}\nabla\Psi_T,
-$$
-the so-called quantum force. This term is responsible for pushing the walker towards regions of configuration space where the trial wave function is large, increasing the efficiency of the simulation in contrast to the Metropolis algorithm, where the walker has the same probability of moving in every direction.

+The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is
+$$
+\left[\frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C}\right]_x =
+2\sum_{k=1}^{N}\sum_{i=1}^{k-1}\frac{\partial^2 g_{ik}}{\partial x_k^2}
++\sum_{k=1}^N\left(\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} -
+\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}\right)^2.
+$$

diff --git a/doc/pub/week3/html/._week3-bs021.html b/doc/pub/week3/html/._week3-bs021.html
index 06012627..7a53d2c6 100644
--- a/doc/pub/week3/html/._week3-bs021.html
+++ b/doc/pub/week3/html/._week3-bs021.html
@@ -153,19 +277,18 @@

 Importance sampling

-The Fokker-Planck equation yields a transition probability (the solution to the equation) given by the Green's function
-$$
-G(y,x,\Delta t) = \frac{1}{(4\pi D\Delta t)^{3N/2}} \exp{\left(-(y-x-D\Delta t F(x))^2/4D\Delta t\right)},
-$$
-which in turn means that our brute-force Metropolis algorithm
-$$
-A(y,x) = \mathrm{min}(1,q(y,x)),
-$$
-with \( q(y,x) = |\Psi_T(y)|^2/|\Psi_T(x)|^2 \), is now replaced by the Metropolis-Hastings algorithm with
-$$
-q(y,x) = \frac{G(x,y,\Delta t)|\Psi_T(y)|^2}{G(y,x,\Delta t)|\Psi_T(x)|^2}.
-$$

+But we have a simple form for the function, namely
+$$
+\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})},
+$$
+and it is easy to see that for particle \( k \) we have
+$$
+\frac{\mathbf{\nabla}^2_k \Psi_C}{\Psi_C} =
+\sum_{ij\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})
++\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right).
+$$
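An editor's sketch of this closed form (assuming three-dimensional positions, which is where the \( 2/r_{kj} \) term comes from, and the Pade-Jastrow choice \( f(r)=r/(1+\beta r) \)):

import numpy as np

beta = 0.3  # assumed Jastrow parameter

def f1(r):
    # f'(r) for the assumed f(r) = r/(1 + beta*r)
    return 1.0 / (1.0 + beta * r)**2

def f2(r):
    # f''(r) for the assumed f(r)
    return -2.0 * beta / (1.0 + beta * r)**3

def laplacian_ratio(pos, k):
    # nabla_k^2 Psi_C / Psi_C via the double and single sums above
    others = [i for i in range(pos.shape[0]) if i != k]
    total = 0.0
    for i in others:
        ri = pos[k] - pos[i]
        di = np.linalg.norm(ri)
        for j in others:
            rj = pos[k] - pos[j]
            dj = np.linalg.norm(rj)
            total += np.dot(ri, rj) / (di * dj) * f1(di) * f1(dj)
    for j in others:
        dj = np.linalg.norm(pos[k] - pos[j])
        total += f2(dj) + 2.0 / dj * f1(dj)
    return total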
diff --git a/doc/pub/week3/html/._week3-bs022.html b/doc/pub/week3/html/._week3-bs022.html
index ea4fc2c0..3fa3e00f 100644
--- a/doc/pub/week3/html/._week3-bs022.html
+++ b/doc/pub/week3/html/._week3-bs022.html
@@ -149,267 +273,25 @@

     

     

     

-Code example for the interacting case with importance sampling

-We are now ready to implement importance sampling. This is done here for the two-electron case with the Coulomb interaction, as in the previous example. We have two variational parameters \( \alpha \) and \( \beta \).

+Importance sampling, Fokker-Planck and Langevin equations

+A stochastic process is simply a function of two variables: one is the time,
+the other is a stochastic variable \( X \), defined by specifying

+ * the set \( \left\{x\right\} \) of possible values for \( X \);
+ * the probability distribution \( w_X(x) \) over this set, or briefly \( w(x) \).

+The set of values \( \left\{x\right\} \) for \( X \)
+may be discrete or continuous. If the set of
+values is continuous, then \( w_X(x) \) is a probability density, so that
+\( w_X(x)dx \)
+is the probability that one finds the stochastic variable \( X \) to have values
+in the range \( [x, x + dx] \).
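A small numerical illustration of the statement that \( w_X(x)dx \) is a probability (editor's example, using an assumed Gaussian density):

import numpy as np

rng = np.random.default_rng()
# continuous example: w(x) = exp(-x**2/2)/sqrt(2*pi)
samples = rng.standard_normal(100000)
x, dx = 0.5, 0.01
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
frac = np.mean((samples >= x) & (samples < x + dx))
print(frac, w * dx)  # the empirical fraction approaches w(x)*dx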

diff --git a/doc/pub/week3/html/._week3-bs023.html b/doc/pub/week3/html/._week3-bs023.html
index 2c71ca3d..a21e5fab 100644
--- a/doc/pub/week3/html/._week3-bs023.html
+++ b/doc/pub/week3/html/._week3-bs023.html
@@ -8,8 +8,8 @@
-Week 5 January 30-February 3: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@

     

     

     

-Importance sampling, program elements in C++
+Importance sampling, Fokker-Planck and Langevin equations
-The full code is at this link. Here we include only the parts pertaining to the computation of the quantum force and the Metropolis update. The program is a modification of our previous C++ program; we display only the part from the vmcsolver.cpp file. Note the use of the function GaussianDeviate.

-void VMCSolver::runMonteCarloIntegration()
-{
-  rOld = zeros<mat>(nParticles, nDimensions);
-  rNew = zeros<mat>(nParticles, nDimensions);
-  QForceOld = zeros<mat>(nParticles, nDimensions);
-  QForceNew = zeros<mat>(nParticles, nDimensions);
-
-  double waveFunctionOld = 0;
-  double waveFunctionNew = 0;
-
-  double energySum = 0;
-  double energySquaredSum = 0;
-
-  double deltaE;
-
-  // initial trial positions drawn from a Gaussian with variance timestep
-  for(int i = 0; i < nParticles; i++) {
-    for(int j = 0; j < nDimensions; j++) {
-      rOld(i,j) = GaussianDeviate(&idum)*sqrt(timestep);
-    }
-  }
-  rNew = rOld;
+An arbitrary number of other stochastic variables may be derived from \( X \). For example, any \( Y \) given by a mapping of \( X \) is also a stochastic variable. The mapping may also be time-dependent, that is, depend on an additional variable \( t \),
+$$
+Y_X(t) = f(X, t).
+$$
+The quantity \( Y_X(t) \) is called a random function or, since \( t \) often is time, a stochastic process. A stochastic process is a function of two variables, one is the time, the other is a stochastic variable \( X \). Let \( x \) be one of the possible values of \( X \); then
+$$
+y(t) = f(x, t)
+$$
+is a function of \( t \), called a sample function or realization of the process. In physics one considers the stochastic process to be an ensemble of such sample functions.
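As a small illustration of the ensemble picture (again an editorial sketch, not part of the diff; we pick discretized Brownian paths as the example process), each row below is one sample function, that is, one realization \( y(t) \) of the same stochastic process:

import numpy as np

rng = np.random.default_rng(42)
n_steps, n_paths, dt = 1000, 5, 0.01

# each row is one sample function (realization) of the process
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# the stochastic process is the ensemble of such sample functions
print("final values of the five realizations:", np.round(paths[:, -1], 3))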

diff --git a/doc/pub/week3/html/._week3-bs024.html b/doc/pub/week3/html/._week3-bs024.html

     

     

     

-Importance sampling, program elements
+Importance sampling, Fokker-Planck and Langevin equations

-  // loop over Monte Carlo cycles
-  for(int cycle = 0; cycle < nCycles; cycle++) {
-
-    // Store the current value of the wave function
-    waveFunctionOld = waveFunction(rOld);
-    QuantumForce(rOld, QForceOld); QForceOld = QForceOld*h/waveFunctionOld;
-    // New position to test
-    for(int i = 0; i < nParticles; i++) {
-      for(int j = 0; j < nDimensions; j++) {
-        rNew(i,j) = rOld(i,j) + GaussianDeviate(&idum)*sqrt(timestep) + QForceOld(i,j)*timestep*D;
-      }
-      // for the other particles we need to set the position to the old position since
-      // we move only one particle at the time
-      for (int k = 0; k < nParticles; k++) {
-        if ( k != i) {
-          for (int j = 0; j < nDimensions; j++) {
-            rNew(k,j) = rOld(k,j);
-          }
-        }
-      }
+For many physical systems initial distributions of a stochastic variable \( y \) tend to equilibrium distributions: \( w(y, t)\rightarrow w_0(y) \) as \( t\rightarrow\infty \). In equilibrium, detailed balance constrains the transition rates,
+$$
+W(y\rightarrow y')w_0(y) = W(y'\rightarrow y)w_0(y'),
+$$
+where \( W(y\rightarrow y') \) is the probability, per unit time, that the system changes from a state \( |y\rangle \), characterized by the value \( y \) for the stochastic variable \( Y \), to a state \( |y'\rangle \).
+
+Note that for a system in equilibrium the transition rate \( W(y'\rightarrow y) \) and the reverse \( W(y\rightarrow y') \) may be very different.
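The constraint can be checked numerically. In the sketch below (an editorial illustration; the two-state system, rates and time step are our own choices) we pick \( W(0\rightarrow 1) \) freely, fix \( W(1\rightarrow 0) \) from detailed balance, and verify that the master equation relaxes to the prescribed \( w_0 \):

import numpy as np

# two-state system with prescribed equilibrium distribution w0
w0 = np.array([0.8, 0.2])
# choose W(0->1) freely and fix W(1->0) from detailed balance:
# W(0->1) w0[0] = W(1->0) w0[1]
W01 = 0.1
W10 = W01 * w0[0] / w0[1]

# evolve the master equation with a simple Euler step
dt = 0.1
w = np.array([0.0, 1.0])              # start far from equilibrium
for _ in range(2000):
    flow = W01 * w[0] - W10 * w[1]    # net probability flow from state 0 to 1
    w = w + dt * np.array([-flow, flow])

print("stationary distribution:", np.round(w, 6), " target w0:", w0)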

diff --git a/doc/pub/week3/html/._week3-bs025.html b/doc/pub/week3/html/._week3-bs025.html

     

     

     

-Importance sampling, program elements
+Importance sampling, Fokker-Planck and Langevin equations

-      // loop over Monte Carlo cycles, continued
-      // Recalculate the value of the wave function and the quantum force
-      waveFunctionNew = waveFunction(rNew);
-      QuantumForce(rNew, QForceNew); QForceNew = QForceNew*h/waveFunctionNew;
-      // we compute the log of the ratio of the Green's functions to be used in the
-      // Metropolis-Hastings algorithm
-      GreensFunction = 0.0;
-      for (int j = 0; j < nDimensions; j++) {
-        GreensFunction += 0.5*(QForceOld(i,j)+QForceNew(i,j))*
-          (D*timestep*0.5*(QForceOld(i,j)-QForceNew(i,j))-rNew(i,j)+rOld(i,j));
-      }
-      GreensFunction = exp(GreensFunction);
-
-      // The Metropolis test is performed by moving one particle at the time
-      if(ran2(&idum) <= GreensFunction*(waveFunctionNew*waveFunctionNew)/(waveFunctionOld*waveFunctionOld)) {
-        for(int j = 0; j < nDimensions; j++) {
-          rOld(i,j) = rNew(i,j);
-          QForceOld(i,j) = QForceNew(i,j);
-          waveFunctionOld = waveFunctionNew;
-        }
-      } else {
-        for(int j = 0; j < nDimensions; j++) {
-          rNew(i,j) = rOld(i,j);
-          QForceNew(i,j) = QForceOld(i,j);
-        }
-      }
+Consider, for instance, a simple system that has only two energy levels, \( \epsilon_0 = 0 \) and \( \epsilon_1 = \Delta E \).
+
+For a system governed by the Boltzmann distribution we find (the partition function has been taken out)
+$$
+W(0\rightarrow 1)\exp{(-\epsilon_0/kT)} = W(1\rightarrow 0)\exp{(-\epsilon_1/kT)}.
+$$
+We then get
+$$
+\frac{W(0\rightarrow 1)}{W(1\rightarrow 0)}=\exp{(-\Delta E/kT)},
+$$
+which goes to zero when \( T \) tends to zero.
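A quick numerical look at this limit (editorial sketch; we set \( \Delta E = 1 \) so that temperature is measured in units of \( \Delta E \)):

import numpy as np

for kT in [10.0, 1.0, 0.1, 0.01]:
    # ratio of upward to downward rates, W(0->1)/W(1->0), with Delta E = 1
    ratio = np.exp(-1.0 / kT)
    print(f"kT/Delta E = {kT:5.2f}: W(0->1)/W(1->0) = {ratio:.3e}")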

diff --git a/doc/pub/week3/html/._week3-bs026.html b/doc/pub/week3/html/._week3-bs026.html

     

     

     

-Importance sampling, program elements
+Importance sampling, Fokker-Planck and Langevin equations

-void VMCSolver::QuantumForce(const mat &r, mat &QForce)
-{
-    mat rPlus = zeros<mat>(nParticles, nDimensions);
-    mat rMinus = zeros<mat>(nParticles, nDimensions);
-    rPlus = rMinus = r;
-    double waveFunctionMinus = 0;
-    double waveFunctionPlus = 0;
-
-    // central-difference estimate of the gradient of the wave function;
-    // the caller rescales by h/waveFunction to obtain the quantum force
-    for(int i = 0; i < nParticles; i++) {
-        for(int j = 0; j < nDimensions; j++) {
-            rPlus(i,j) += h;
-            rMinus(i,j) -= h;
-            waveFunctionMinus = waveFunction(rMinus);
-            waveFunctionPlus = waveFunction(rPlus);
-            QForce(i,j) = (waveFunctionPlus-waveFunctionMinus);
-            rPlus(i,j) = r(i,j);
-            rMinus(i,j) = r(i,j);
-        }
-    }
-}
+If we assume a discrete set of events, our initial probability distribution function can be given by
+$$
+w_i(0) = \delta_{i,0},
+$$
+and its time development after a given time step \( \Delta t=\epsilon \) is
+$$
+w_i(t) = \sum_{j}W(j\rightarrow i)w_j(t=0).
+$$
+The continuous analog to \( w_i(0) \) is
+$$
+w(\mathbf{x})\rightarrow \delta(\mathbf{x}),
+$$
+where we now have generalized the one-dimensional position \( x \) to a generic-dimensional vector \( \mathbf{x} \). The Kronecker \( \delta \) function is replaced by the \( \delta \) distribution function \( \delta(\mathbf{x}) \) at \( t=0 \).
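The discrete time development is just a repeated matrix-vector product. A minimal Python sketch (editorial illustration; the 3-state transition matrix is an invented example whose columns sum to one so that probability is conserved):

import numpy as np

# W[i, j] = W(j -> i): transition probabilities per time step for 3 states
W = np.array([[0.8, 0.3, 0.1],
              [0.1, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

w = np.array([1.0, 0.0, 0.0])   # w_i(0) = delta_{i,0}
for _ in range(50):
    w = W @ w                   # w_i(t + eps) = sum_j W(j -> i) w_j(t)

print("distribution after 50 steps:", np.round(w, 4))
print("column sums (probability conservation):", W.sum(axis=0))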

diff --git a/doc/pub/week3/html/._week3-bs027.html b/doc/pub/week3/html/._week3-bs027.html

     

     

     

-Importance sampling, program elements
+Importance sampling, Fokker-Planck and Langevin equations
-The general derivative formula of the Jastrow factor is (the subscript \( C \) stands for Correlation)
-$$
-\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = \sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} + \sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k}.
-$$
-However, with our \( \Psi_C \) written in a way which can be reused later as
-$$
-\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}f(r_{ij})\right\}},
-$$
-the gradient needed for the quantum force and local energy is easy to compute. The function \( f(r_{ij}) \) will depend on the system under study. In the equations below we keep this general form.
+The transition from a state \( j \) to a state \( i \) is now replaced by a transition to a state with position \( \mathbf{y} \) from a state with position \( \mathbf{x} \). The discrete sum of transition probabilities can then be replaced by an integral, and we obtain the new distribution at a time \( t+\Delta t \) as
+$$
+w(\mathbf{y},t+\Delta t)= \int W(\mathbf{y},t+\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x},
+$$
+and after \( m \) time steps we have
+$$
+w(\mathbf{y},t+m\Delta t)= \int W(\mathbf{y},t+m\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}.
+$$
+When equilibrium is reached we have
+$$
+w(\mathbf{y})= \int W(\mathbf{y}|\mathbf{x}, t)w(\mathbf{x})d\mathbf{x},
+$$
+that is, no time dependence. Note our change of notation for \( W \).
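For free diffusion the transition kernel over one step is a Gaussian of variance \( 2D\Delta t \), and the integral evolution above can be iterated numerically. The sketch below (editorial illustration; grid, time step and diffusion constant are our own choices) starts from a sharply peaked distribution and recovers the expected variance \( 2Dt \):

import numpy as np

D, dt = 1.0, 0.01
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Gaussian transition kernel W(y, t+dt | x, t) for free diffusion
def kernel(y, xp):
    return np.exp(-(y - xp)**2 / (4 * D * dt)) / np.sqrt(4 * np.pi * D * dt)

# narrow peak approximating delta(x) at t = 0
w = np.exp(-x**2 / 1e-4) / np.sqrt(np.pi * 1e-4)

# w(y, t+dt) = int W(y, t+dt | x, t) w(x, t) dx, discretized as a matrix product
Wmat = kernel(x[:, None], x[None, :])
for _ in range(100):
    w = Wmat @ w * dx

print("variance after 100 steps:", np.sum(x**2 * w * dx))
print("expected 2*D*t:", 2 * D * 100 * dt)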

diff --git a/doc/pub/week3/html/._week3-bs028.html b/doc/pub/week3/html/._week3-bs028.html

     

     

     

-Importance sampling, program elements
+Importance sampling, Fokker-Planck and Langevin equations
-In the Metropolis-Hastings algorithm, the acceptance ratio determines the probability for a particle to be accepted at a new position. The ratio of the trial wave functions evaluated at the new and current positions is given by (\( OB \) for the onebody part)
-$$
-R \equiv \frac{\Psi_{T}^{new}}{\Psi_{T}^{old}} = \frac{\Psi_{OB}^{new}}{\Psi_{OB}^{old}}\frac{\Psi_{C}^{new}}{\Psi_{C}^{old}}.
-$$
-Here \( \Psi_{OB} \) is our onebody part (Slater determinant or product of boson single-particle states) while \( \Psi_{C} \) is our correlation function, or Jastrow factor. We need to optimize the \( \nabla \Psi_T / \Psi_T \) ratio and the second derivative as well, that is the \( \mathbf{\nabla}^2 \Psi_T/\Psi_T \) ratio. The first is needed when we compute the so-called quantum force in importance sampling, the second when we compute the kinetic energy term of the local energy,
-$$
-\frac{\mathbf{\nabla} \Psi}{\Psi} = \frac{\mathbf{\nabla} (\Psi_{OB} \, \Psi_{C})}{\Psi_{OB} \, \Psi_{C}} = \frac{ \Psi_C \mathbf{\nabla} \Psi_{OB} + \Psi_{OB} \mathbf{\nabla} \Psi_{C}}{\Psi_{OB} \Psi_{C}} = \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla} \Psi_C}{ \Psi_C}.
-$$
+We can solve the equation for \( w(\mathbf{y},t) \) by making a Fourier transform to momentum space. The PDF \( w(\mathbf{x},t) \) is related to its Fourier transform \( \tilde{w}(\mathbf{k},t) \) through
+$$
+w(\mathbf{x},t) = \int_{-\infty}^{\infty}d\mathbf{k}\, \exp{(i\mathbf{kx})}\tilde{w}(\mathbf{k},t),
+$$
+and using the definition of the \( \delta \)-function,
+$$
+\delta(\mathbf{x}) = \frac{1}{2\pi} \int_{-\infty}^{\infty}d\mathbf{k}\, \exp{(i\mathbf{kx})},
+$$
+we see that
+$$
+\tilde{w}(\mathbf{k},0)=1/2\pi.
+$$
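This initial condition can be verified numerically in one dimension (editorial sketch; we approximate \( \delta(x) \) by a narrow Gaussian of width \( \sigma \), our choice, and use the inverse relation \( \tilde{w}(k,0)=\frac{1}{2\pi}\int dx\, e^{-ikx} w(x,0) \) implied by the conventions above):

import numpy as np

x = np.linspace(-5, 5, 4001)
dx = x[1] - x[0]
sigma = 0.01
# narrow Gaussian approximating delta(x) at t = 0
w0 = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

for k in [0.0, 1.0, 5.0]:
    wk = np.sum(np.exp(-1j * k * x) * w0) * dx / (2 * np.pi)
    print(f"k = {k}: w~(k,0) = {wk.real:.5f}  (1/(2 pi) = {1/(2*np.pi):.5f})")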
diff --git a/doc/pub/week3/html/._week3-bs029.html b/doc/pub/week3/html/._week3-bs029.html

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations
-The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is
-$$
-\langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\mathbf{\nabla}_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle},
-$$
-$$
-\hat{K}_i = -\frac{1}{2}\frac{\mathbf{\nabla}_{i}^{2} \Psi}{\Psi}.
-$$
+We can then use the Fourier-transformed diffusion equation,
+$$
+\frac{\partial \tilde{w}(\mathbf{k},t)}{\partial t} = -D\mathbf{k}^2\tilde{w}(\mathbf{k},t),
+$$
+with the obvious solution
+$$
+\tilde{w}(\mathbf{k},t)=\tilde{w}(\mathbf{k},0)\exp{\left[-D\mathbf{k}^2t\right]}= \frac{1}{2\pi}\exp{\left[-D\mathbf{k}^2t\right]}.
+$$
    @@ -398,7 +316,7 @@
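A quick numerical cross-check of this solution (a minimal sketch of my own, not part of the course programs; the values of \( D \), \( k \) and the time step are illustrative assumptions): integrate a single Fourier mode with Euler's method and compare with the analytic exponential decay.

#include <cmath>
#include <cstdio>

int main() {
  const double PI = std::acos(-1.0);
  const double D  = 0.5;       // diffusion coefficient (illustrative)
  const double k  = 2.0;       // wave number of the mode we follow (illustrative)
  const double dt = 1.0e-4;    // Euler time step (illustrative)
  const int nSteps = 10000;    // integrate up to t = 1
  double wTilde = 1.0 / (2.0 * PI);       // w~(k,0) for a delta-function start
  for (int n = 0; n < nSteps; ++n)
    wTilde += -D * k * k * wTilde * dt;   // Euler update of dw~/dt = -D k^2 w~
  const double t = nSteps * dt;
  const double exact = std::exp(-D * k * k * t) / (2.0 * PI);
  std::printf("numerical = %.8f  analytic = %.8f\n", wTilde, exact);
  return 0;
}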

diff --git a/doc/pub/week3/html/._week3-bs030.html b/doc/pub/week3/html/._week3-bs030.html
@@ -357,19 +273,24 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-The second derivative which enters the definition of the local energy is
+With the Fourier transform we obtain
$$
-\frac{\mathbf{\nabla}^2 \Psi}{\Psi}=\frac{\mathbf{\nabla}^2 \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C} + 2\frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}}\cdot\frac{\mathbf{\nabla} \Psi_C}{\Psi_C}
+w(\mathbf{x},t)=\int_{-\infty}^{\infty}d\mathbf{k}\,\exp{\left[i\mathbf{k}\mathbf{x}\right]}\frac{1}{2\pi}\exp{\left[-D\mathbf{k}^2t\right]}=
+\frac{1}{\sqrt{4\pi Dt}}\exp{\left[-\mathbf{x}^2/4Dt\right]},
$$
-We discuss here how to calculate these quantities in an optimal way,
+with the normalization condition
+$$
+\int_{-\infty}^{\infty}w(\mathbf{x},t)d\mathbf{x}=1.
+$$
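The step from the \( \mathbf{k} \)-integral to the Gaussian on the right-hand side is the standard completion of the square, spelled out here since the slide leaves it implicit:
$$
\int_{-\infty}^{\infty}d\mathbf{k}\,\exp{\left[i\mathbf{k}\mathbf{x}-D\mathbf{k}^2t\right]}
= \exp{\left[-\mathbf{x}^2/4Dt\right]}\int_{-\infty}^{\infty}d\mathbf{k}\,\exp{\left[-Dt\left(\mathbf{k}-i\mathbf{x}/2Dt\right)^2\right]}
= \sqrt{\frac{\pi}{Dt}}\exp{\left[-\mathbf{x}^2/4Dt\right]},
$$
and the prefactor \( 1/2\pi \) turns \( \sqrt{\pi/Dt} \) into \( 1/\sqrt{4\pi Dt} \).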

    diff --git a/doc/pub/week3/html/._week3-bs031.html b/doc/pub/week3/html/._week3-bs031.html index 5ed3a8ee..fb247c06 100644 --- a/doc/pub/week3/html/._week3-bs031.html +++ b/doc/pub/week3/html/._week3-bs031.html @@ -8,8 +8,8 @@ - -Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations + +Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations @@ -36,43 +36,10 @@
@@ -357,24 +273,29 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-We have defined the correlated function as
-$$
-\Psi_C=\prod_{i < j}g(r_{ij})=\prod_{i < j}^Ng(r_{ij})= \prod_{i=1}^N\prod_{j=i+1}^Ng(r_{ij}),
-$$
-with \( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \) in three dimensions or
-\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \) if we work with two-dimensional systems.
+The solution represents the probability of finding our random walker at position \( \mathbf{x} \)
+at time \( t \) if the initial distribution was placed at \( \mathbf{x}=0 \) at \( t=0 \).

-In our particular case we have
-$$
-\Psi_C=\prod_{i < j}g(r_{ij})=\exp{\left\{\sum_{i < j}f(r_{ij})\right\}}.
-$$
+There is another interesting feature worth observing. The discrete transition probability \( W \)
+is itself given by a binomial distribution. By the central limit theorem, this transition
+probability converges to the normal distribution in the limit \( n\rightarrow \infty \).
+It is then possible to show that
+$$
+W(il-jl,n\epsilon)\rightarrow W(\mathbf{y},t+\Delta t|\mathbf{x},t)=
+\frac{1}{\sqrt{4\pi D\Delta t}}\exp{\left[-(\mathbf{y}-\mathbf{x})^2/4D\Delta t\right]},
+$$
+and that it satisfies the normalization condition and is itself a solution to the diffusion equation.
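The central-limit statement can be checked directly. A minimal sketch (my illustration, not course code; the step length \( l \), time step \( \epsilon \) and the seed are assumptions) propagates many discrete walkers and compares the sampled \( \langle x^2\rangle \) with the Gaussian value \( 2Dt \), using \( D=l^2/2\epsilon \).

#include <cstdio>
#include <random>

int main() {
  const double l = 1.0, eps = 1.0;          // step length and time step (illustrative)
  const int nSteps = 1000, nWalkers = 100000;
  const double D = l * l / (2.0 * eps);     // diffusion coefficient of the walk
  std::mt19937_64 rng(12345);
  std::bernoulli_distribution coin(0.5);    // step right or left with probability 1/2
  double sumX2 = 0.0;
  for (int w = 0; w < nWalkers; ++w) {
    double x = 0.0;                         // every walker starts at x = 0
    for (int n = 0; n < nSteps; ++n) x += coin(rng) ? l : -l;
    sumX2 += x * x;
  }
  const double t = nSteps * eps;
  std::printf("<x^2> = %.2f  vs  2Dt = %.2f\n", sumX2 / nWalkers, 2.0 * D * t);
  return 0;
}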

diff --git a/doc/pub/week3/html/._week3-bs032.html b/doc/pub/week3/html/._week3-bs032.html
@@ -357,27 +273,31 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-The total number of different relative distances \( r_{ij} \) is \( N(N-1)/2 \). In a matrix storage format, the relative distances form a strictly upper triangular matrix
-$$
-\mathbf{r} \equiv \begin{pmatrix}
-0 & r_{1,2} & r_{1,3} & \cdots & r_{1,N} \\
-\vdots & 0 & r_{2,3} & \cdots & r_{2,N} \\
-\vdots & \vdots & 0 & \ddots & \vdots \\
-\vdots & \vdots & \vdots & \ddots & r_{N-1,N} \\
-0 & 0 & 0 & \cdots & 0
-\end{pmatrix}.
-$$
-This applies to \( \mathbf{g} = \mathbf{g}(r_{ij}) \) as well.
-
-In our algorithm we will move one particle at the time, say the \( k \)th particle. This sampling will be seen to be particularly efficient when we are going to compute a Slater determinant.
+Let us now assume that we have three PDFs for times \( t_0 < t' < t \), that is
+\( w(\mathbf{x}_0,t_0) \), \( w(\mathbf{x}',t') \) and \( w(\mathbf{x},t) \).
+We have then
+$$
+w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')w(\mathbf{x}',t')d\mathbf{x}',
+$$
+and
+$$
+w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0,
+$$
+and
+$$
+w(\mathbf{x}',t')= \int_{-\infty}^{\infty} W(\mathbf{x}',t'|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0.
+$$

    diff --git a/doc/pub/week3/html/._week3-bs033.html b/doc/pub/week3/html/._week3-bs033.html index 59d11d4e..2d6ff856 100644 --- a/doc/pub/week3/html/._week3-bs033.html +++ b/doc/pub/week3/html/._week3-bs033.html @@ -8,8 +8,8 @@ - -Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations + +Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations @@ -36,43 +36,10 @@
@@ -357,29 +273,20 @@

     

     

     

    -

    Importance sampling

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -

    We have that the ratio between Jastrow factors \( R_C \) is given by

    -$$ -R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = -\prod_{i=1}^{k-1}\frac{g_{ik}^\mathrm{new}}{g_{ik}^\mathrm{cur}} -\prod_{i=k+1}^{N}\frac{ g_{ki}^\mathrm{new}} {g_{ki}^\mathrm{cur}}. -$$ - -

    For the Pade-Jastrow form

    +

    We can combine these equations and arrive at the famous Einstein-Smoluchenski-Kolmogorov-Chapman (ESKC) relation

    $$ - R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = -\frac{\exp{U_{new}}}{\exp{U_{cur}}} = \exp{\Delta U}, + W(\mathbf{x}t|\mathbf{x}_0t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'. $$ -

    where

    +

    We can replace the spatial dependence with a dependence upon say the velocity +(or momentum), that is we have +

    $$ -\Delta U = -\sum_{i=1}^{k-1}\big(f_{ik}^\mathrm{new}-f_{ik}^\mathrm{cur}\big) -+ -\sum_{i=k+1}^{N}\big(f_{ki}^\mathrm{new}-f_{ki}^\mathrm{cur}\big) + W(\mathbf{v},t|\mathbf{v}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{v},t|\mathbf{v}',t')W(\mathbf{v}',t'|\mathbf{v}_0,t_0)d\mathbf{x}'. $$
    @@ -410,7 +317,7 @@
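The ESKC relation lends itself to a direct numerical test for the free-diffusion Gaussian kernel. The sketch below (my illustration, not course code; \( D \), the times and the integration grid are assumptions) evaluates the \( \mathbf{x}' \) integral with the trapezoidal rule and compares it with the kernel over the full time interval.

#include <cmath>
#include <cstdio>

// Gaussian transition kernel of free diffusion, W(x,t|x0)
double W(double x, double x0, double t, double D) {
  const double PI = std::acos(-1.0);
  return std::exp(-(x - x0) * (x - x0) / (4.0 * D * t)) / std::sqrt(4.0 * PI * D * t);
}

int main() {
  const double D = 0.5, s = 0.3, tau = 0.7;   // illustrative values
  const double x0 = 0.0, x = 1.0;
  const double a = -20.0, b = 20.0;           // integration window for x'
  const int n = 200000;
  const double h = (b - a) / n;
  double integral = 0.0;
  for (int i = 0; i <= n; ++i) {              // trapezoidal rule over x'
    const double xp = a + i * h;
    const double weight = (i == 0 || i == n) ? 0.5 : 1.0;
    integral += weight * W(x, xp, tau, D) * W(xp, x0, s, D);
  }
  integral *= h;
  std::printf("ESKC integral = %.8f  direct = %.8f\n", integral, W(x, x0, s + tau, D));
  return 0;
}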

diff --git a/doc/pub/week3/html/._week3-bs034.html b/doc/pub/week3/html/._week3-bs034.html
@@ -357,24 +273,25 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-One needs to develop a special algorithm
-that runs only through the elements of the upper triangular
-matrix \( \mathbf{g} \) and has \( k \) as an index.
-
-The expression to be derived in the following is of interest when computing the quantum force and the kinetic energy. It has the form
-$$
-\frac{\mathbf{\nabla}_i\Psi_C}{\Psi_C} = \frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_i},
-$$
-for all dimensions and with \( i \) running over all particles.
+We will now derive the Fokker-Planck equation.
+We start from the ESKC equation
+$$
+W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'.
+$$
+Define \( s=t'-t_0 \), \( \tau=t-t' \) and \( t-t_0=s+\tau \); since the transition probability of the diffusion process depends only on time differences, we have then
+$$
+W(\mathbf{x},s+\tau|\mathbf{x}_0) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}')W(\mathbf{x}',s|\mathbf{x}_0)d\mathbf{x}'.
+$$

    diff --git a/doc/pub/week3/html/._week3-bs035.html b/doc/pub/week3/html/._week3-bs035.html index 9d6068a9..cd2b8e76 100644 --- a/doc/pub/week3/html/._week3-bs035.html +++ b/doc/pub/week3/html/._week3-bs035.html @@ -8,8 +8,8 @@ - -Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations + +Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations @@ -36,43 +36,10 @@
@@ -357,27 +273,16 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-For the first derivative only \( N-1 \) terms survive the ratio because the \( g \)-terms that are not differentiated cancel with their corresponding ones in the denominator. Then,
-$$
-\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
-\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
-+\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_k}.
-$$
-An equivalent equation is obtained for the exponential form after replacing \( g_{ij} \) by \( \exp(f_{ij}) \), yielding:
-$$
-\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
-\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
-+\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k},
-$$
-with both expressions scaling as \( \mathcal{O}(N) \).
+Assume now that \( \tau \) is very small so that we can make an expansion in terms of a small step \( \xi \), with \( \mathbf{x}'=\mathbf{x}-\xi \), that is
+$$
+W(\mathbf{x},s|\mathbf{x}_0)+\frac{\partial W}{\partial s}\tau + O(\tau^2) =
+\int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0)d\xi.
+$$
+We assume that \( W(\mathbf{x},\tau|\mathbf{x}-\xi) \) takes non-negligible values only when \( \xi \) is small. This is just another way of stating the Master equation.

diff --git a/doc/pub/week3/html/._week3-bs036.html b/doc/pub/week3/html/._week3-bs036.html
@@ -357,28 +273,18 @@

     

     

     

-Importance sampling
+Importance sampling, Fokker-Planck and Langevin equations

-Using the identity
-$$
-\frac{\partial}{\partial x_i}g_{ij} = -\frac{\partial}{\partial x_j}g_{ij},
-$$
-we get expressions where all the derivatives acting on the particle are represented by the second index of \( g \):
-$$
-\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
-\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
--\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_i},
-$$
-and for the exponential case:
-$$
-\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
-\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
--\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}.
-$$
+We say thus that \( \mathbf{x} \) changes only by a small amount in the time interval \( \tau \).
+This means that we can make a Taylor expansion in terms of \( \xi \), that is we expand
+$$
+W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0) =
+\sum_{n=0}^{\infty}\frac{(-\xi)^n}{n!}\frac{\partial^n}{\partial x^n}\left[W(\mathbf{x}+\xi,\tau|\mathbf{x})W(\mathbf{x},s|\mathbf{x}_0)\right].
+$$
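For completeness, and anticipating the standard next step of this derivation: inserting the expansion into the ESKC integral and integrating term by term over \( \xi \) gives, with the moments defined as
$$
M_n(\mathbf{x},\tau) \equiv \int_{-\infty}^{\infty}\xi^n W(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi,
$$
the relation
$$
W(\mathbf{x},s|\mathbf{x}_0)+\frac{\partial W}{\partial s}\tau =
\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}\left[M_n(\mathbf{x},\tau)W(\mathbf{x},s|\mathbf{x}_0)\right].
$$
The \( n=0 \) term equals \( W(\mathbf{x},s|\mathbf{x}_0) \), since \( M_0=1 \) by normalization, and cancels against the first term on the left-hand side.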

    Importance sampling

  • 45
  • 46
  • ...
  • -
  • 71
  • +
  • 49
  • »
  • diff --git a/doc/pub/week3/html/._week3-bs037.html b/doc/pub/week3/html/._week3-bs037.html index ba291df9..60758ede 100644 --- a/doc/pub/week3/html/._week3-bs037.html +++ b/doc/pub/week3/html/._week3-bs037.html @@ -8,8 +8,8 @@ - -Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations + +Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations @@ -36,43 +36,10 @@
  • @@ -357,21 +273,20 @@

     

     

     

    -

    Importance sampling

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -

    For correlation forms depending only on the scalar distances \( r_{ij} \) we can use the chain rule. Noting that

    +

    We can then rewrite the ESKC equation as

$$
-\frac{\partial g_{ij}}{\partial x_j} = \frac{\partial g_{ij}}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_j} = \frac{x_j - x_i}{r_{ij}} \frac{\partial g_{ij}}{\partial r_{ij}},
+\frac{\partial W}{\partial s}\tau=-W(\mathbf{x},s|\mathbf{x}_0)+
+\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}
+\left[W(\mathbf{x},s|\mathbf{x}_0)\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi\right].
$$
-

    we arrive at

    -$$ -\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = -\sum_{i=1}^{k-1}\frac{1}{g_{ik}} \frac{\mathbf{r_{ik}}}{r_{ik}} \frac{\partial g_{ik}}{\partial r_{ik}} --\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial g_{ki}}{\partial r_{ki}}. -$$ +

    We have neglected higher powers of \( \tau \) and have used that for \( n=0 \) +we get simply \( W(\mathbf{x},s|\mathbf{x}_0) \) due to normalization. +
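The \( n=0 \) step uses only the normalization of the transition probability,
$$
\int_{-\infty}^{\infty}W(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi = 1,
$$
so the \( n=0 \) term reproduces \( W(\mathbf{x},s|\mathbf{x}_0) \) and cancels the explicit \( -W(\mathbf{x},s|\mathbf{x}_0) \) on the right-hand side; the sum thus effectively starts at \( n=1 \).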

    @@ -401,7 +316,7 @@

• diff --git a/doc/pub/week3/html/._week3-bs038.html index 8665f6ec..5f4a911a 100644
--- a/doc/pub/week3/html/._week3-bs038.html
+++ b/doc/pub/week3/html/._week3-bs038.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,28 +273,19 @@

     

     

     

    -

    Importance sampling

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -

    Note that for the Pade-Jastrow form we can set \( g_{ij} \equiv g(r_{ij}) = e^{f(r_{ij})} = e^{f_{ij}} \) and

    -$$ -\frac{\partial g_{ij}}{\partial r_{ij}} = g_{ij} \frac{\partial f_{ij}}{\partial r_{ij}}. -$$ - -

    Therefore,

    +

We thus assume that \( \mathbf{x} \) changes only by a small amount in the time interval \( \tau \).
+This means that we can make a Taylor expansion in terms of \( \xi \); that is, we
+expand
+

    $$ -\frac{1}{\Psi_{C}}\frac{\partial \Psi_{C}}{\partial x_k} = -\sum_{i=1}^{k-1}\frac{\mathbf{r_{ik}}}{r_{ik}}\frac{\partial f_{ik}}{\partial r_{ik}} --\sum_{i=k+1}^{N}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial f_{ki}}{\partial r_{ki}}, +W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0) = +\sum_{n=0}^{\infty}\frac{(-\xi)^n}{n!}\frac{\partial^n}{\partial x^n}\left[W(\mathbf{x}+\xi,\tau|\mathbf{x})W(\mathbf{x},s|\mathbf{x}_0) +\right]. $$ - -

    where

    -$$ - \mathbf{r}_{ij} = |\mathbf{r}_j - \mathbf{r}_i| = (x_j - x_i)\mathbf{e}_1 + (y_j - y_i)\mathbf{e}_2 + (z_j - z_i)\mathbf{e}_3 -$$ - -

    is the relative distance.

    @@ -408,7 +315,7 @@

• diff --git a/doc/pub/week3/html/._week3-bs039.html index c934b581..5f807427 100644
--- a/doc/pub/week3/html/._week3-bs039.html
+++ b/doc/pub/week3/html/._week3-bs039.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,21 +273,20 @@

     

     

     

    -

    Importance sampling

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -

    The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is

    +

    We can then rewrite the ESKC equation as

$$
-\left[\frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C}\right]_x =\
-2\sum_{k=1}^{N}
-\sum_{i=1}^{k-1}\frac{\partial^2 g_{ik}}{\partial x_k^2}\ +\
-\sum_{k=1}^N
-\left(
-\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
--\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}
-\right)^2
+\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}\tau=-W(\mathbf{x},s|\mathbf{x}_0)+
+\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}
+\left[W(\mathbf{x},s|\mathbf{x}_0)\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi\right].
$$
+
+

    We have neglected higher powers of \( \tau \) and have used that for \( n=0 \) +we get simply \( W(\mathbf{x},s|\mathbf{x}_0) \) due to normalization. +

    @@ -400,8 +315,6 @@

• diff --git a/doc/pub/week3/html/._week3-bs040.html index 277e26da..1fe51d09 100644
--- a/doc/pub/week3/html/._week3-bs040.html
+++ b/doc/pub/week3/html/._week3-bs040.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,22 +273,21 @@

     

     

     

    -

    Importance sampling

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -

    But we have a simple form for the function, namely

    +

    We simplify the above by introducing the moments

    $$ -\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})}, +M_n=\frac{1}{\tau}\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi= +\frac{\langle [\Delta x(\tau)]^n\rangle}{\tau}, $$ -

    and it is easy to see that for particle \( k \) -we have -

    +

    resulting in

$$
- \frac{\mathbf{\nabla}^2_k \Psi_C}{\Psi_C }=
-\sum_{ij\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})+
-\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right)
+\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}=
+\sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}
+\left[W(\mathbf{x},s|\mathbf{x}_0)M_n\right].
$$
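As a concrete illustration (an added example, not from the slides): for a free-diffusion Gaussian kernel \( W(\mathbf{x}+\xi,\tau|\mathbf{x})=(4\pi D\tau)^{-1/2}\exp{(-\xi^{2}/4D\tau)} \) the odd moments vanish and
$$
M_2=\frac{2D\tau}{\tau}=2D,\qquad M_4=\frac{3(2D\tau)^{2}}{\tau}=12D^{2}\tau,
$$
so every moment beyond the second carries at least one extra power of \( \tau \), anticipating the truncation on the following slides.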
    @@ -401,9 +316,6 @@

• diff --git a/doc/pub/week3/html/._week3-bs041.html index 187c149e..b12f99f2 100644
--- a/doc/pub/week3/html/._week3-bs041.html
+++ b/doc/pub/week3/html/._week3-bs041.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,38 +273,22 @@

     

     

     

    -

    Use the C++ random class for random number generations

    +

    Importance sampling, Fokker-Planck and Langevin equations

    +

When \( \tau \rightarrow 0 \) we assume that \( \langle [\Delta x(\tau)]^n\rangle \rightarrow 0 \) more rapidly than \( \tau \) itself if \( n > 2 \).
+When \( \tau \) is much larger than the standard correlation time of
+the system, \( M_n \) for \( n > 2 \) can normally be neglected.
+This means that fluctuations become negligible at large time scales.
+

    - - -
    -
    -
    -
    -
    -
     // Initialize the seed and call the Mersienne algo
    -  std::random_device rd;
    -  std::mt19937_64 gen(rd());
    -  // Set up the uniform distribution for x \in [[0, 1]
    -  std::uniform_real_distribution<double> UniformNumberGenerator(0.0,1.0);
    -  std::normal_distribution<double> Normaldistribution(0.0,1.0);
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    +

    If we neglect such terms we can rewrite the ESKC equation as

    +$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}= +-\frac{\partial M_1W(\mathbf{x},s|\mathbf{x}_0)}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W(\mathbf{x},s|\mathbf{x}_0)}{\partial x^2}. +$$
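If the walkers follow a drift-diffusion dynamics with drift term \( DF(x) \) and diffusion coefficient \( D \) (the standard identification for the importance-sampling walkers, added here as a consistency check), the two surviving moments are
$$
M_1=DF(x),\qquad M_2=2D,
$$
so the truncated equation above is precisely a Fokker-Planck equation with drift \( F \) and diffusion coefficient \( D \).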
    @@ -415,10 +315,6 @@

• diff --git a/doc/pub/week3/html/._week3-bs042.html index 604a962a..1599b9eb 100644
--- a/doc/pub/week3/html/._week3-bs042.html
+++ b/doc/pub/week3/html/._week3-bs042.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,40 +273,20 @@

     

     

     

    -

    Use the C++ random class for RNGs, the Mersenne twister class

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -Finding the new position for importance sampling +

    In a more compact form we have

    +$$ +\frac{\partial W}{\partial s}= +-\frac{\partial M_1W}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W}{\partial x^2}, +$$ - -
    -
    -
    -
    -
    -
     for (int cycles = 1; cycles <= NumberMCsamples; cycles++){ 
    -    // new position 
    -    for (int i = 0; i < NumberParticles; i++) { 
    -      for (int j = 0; j < Dimension; j++) {
    -        // gaussian deviate to compute new positions using a given timestep
    -        NewPosition(i,j) = OldPosition(i,j) + Normaldistribution(gen)*sqrt(timestep)+OldQuantumForce(i,j)*timestep*D;
    -
    -      }  
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    +

    which is the Fokker-Planck equation! It is trivial to replace +position with velocity (momentum). +
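A minimal sketch (an added example, not the course program) of a single walker advanced with Euler's method using the moments above, \( M_1=DF(x) \) and \( M_2=2D \); the drift function and all parameter values are illustrative assumptions:

#include <cmath>
#include <iostream>
#include <random>

// Placeholder drift F(x); here a simple restoring drift as an example.
double drift(double x) { return -x; }

int main() {
  const double D  = 0.5;   // diffusion coefficient (assumed value)
  const double dt = 0.01;  // time step (assumed value)
  std::mt19937_64 gen(std::random_device{}());
  std::normal_distribution<double> gaussian(0.0, 1.0);

  double x = 1.0;
  for (int step = 0; step < 1000; step++) {
    // Euler step: drift M1*dt = D*F(x)*dt plus Gaussian noise with
    // variance M2*dt = 2*D*dt, i.e. standard deviation sqrt(2*D*dt).
    x += D * drift(x) * dt + gaussian(gen) * std::sqrt(2.0 * D * dt);
  }
  std::cout << "end point of the walk: " << x << std::endl;
  return 0;
}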

    @@ -416,11 +312,6 @@

• diff --git a/doc/pub/week3/html/._week3-bs043.html index 942a7c5e..e962aa3e 100644
--- a/doc/pub/week3/html/._week3-bs043.html
+++ b/doc/pub/week3/html/._week3-bs043.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@
  • @@ -357,47 +273,17 @@

     

     

     

    -

    Use the C++ random class for RNGs, the Metropolis test

    +

    Importance sampling, Fokker-Planck and Langevin equations

    -Using the uniform distribution for the Metropolis test - - -
    -
    -
    -
    -
    -
          //  Metropolis-Hastings algorithm
    -      double GreensFunction = 0.0;            
    -      for (int j = 0; j < Dimension; j++) {
    -        GreensFunction += 0.5*(OldQuantumForce(i,j)+NewQuantumForce(i,j))*
    -          (D*timestep*0.5*(OldQuantumForce(i,j)-NewQuantumForce(i,j))-NewPosition(i,j)+OldPosition(i,j));
    -      }
    -      GreensFunction = exp(GreensFunction);
    -      // The Metropolis test is performed by moving one particle at the time
    -      if(UniformNumberGenerator(gen) <= GreensFunction*NewWaveFunction*NewWaveFunction/OldWaveFunction/OldWaveFunction ) { 
    -        for (int  j = 0; j < Dimension; j++) {
    -          OldPosition(i,j) = NewPosition(i,j);
    -          OldQuantumForce(i,j) = NewQuantumForce(i,j);
    -        }
    -        OldWaveFunction = NewWaveFunction;
    -      }
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    +

Consider a particle suspended in a liquid. On its path through the liquid it will continuously collide with the liquid molecules. Because on average the particle will collide more often on the front side than on the back side, it will experience a systematic force proportional to its velocity, and directed opposite to its velocity. Besides this systematic force the particle will experience a stochastic force \( \mathbf{F}(t) \).
+The equations of motion are (a minimal integration sketch follows the list):
+

    +
      +
    • \( \frac{d\mathbf{r}}{dt}=\mathbf{v} \) and
    • +
    • \( \frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}+\mathbf{F} \).
    • +
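A minimal Euler integration sketch of these two equations (an added example; the friction \( \xi \), the white-noise strength \( q \) with \( \langle F(t)F(t')\rangle = q\delta(t-t') \), and the step size are assumed illustrative values):

#include <cmath>
#include <iostream>
#include <random>

int main() {
  const double xi = 1.0;     // friction coefficient (assumed)
  const double q  = 2.0;     // white-noise strength (assumed)
  const double dt = 1.0e-3;  // time step (assumed)
  std::mt19937_64 gen(std::random_device{}());
  std::normal_distribution<double> gaussian(0.0, 1.0);

  double r = 0.0, v = 0.0;
  for (int step = 0; step < 100000; step++) {
    // dr/dt = v and dv/dt = -xi*v + F; the noise integrated over one
    // step contributes sqrt(q*dt)*N(0,1) to the velocity.
    r += v * dt;
    v += -xi * v * dt + std::sqrt(q * dt) * gaussian(gen);
  }
  std::cout << "final r = " << r << ", final v = " << v << std::endl;
  return 0;
}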
    @@ -422,12 +308,6 @@

• diff --git a/doc/pub/week3/html/._week3-bs044.html index 2848ed71..a78480b0 100644
--- a/doc/pub/week3/html/._week3-bs044.html
+++ b/doc/pub/week3/html/._week3-bs044.html
@@ -8,8 +8,8 @@
-
-Week 5 January 30-February 3: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
@@ -36,43 +36,10 @@

-A stochastic process is simply a function of two variables: one is the time, the other is a stochastic variable \( X \), defined by specifying
-  • the set \( \left\{x\right\} \) of possible values for \( X \);
-  • the probability distribution \( w_X(x) \) over this set, or briefly \( w(x) \).
-The set of values \( \left\{x\right\} \) for \( X \) may be discrete or continuous. If the set of values is continuous, then \( w_X(x) \) is a probability density, so that \( w_X(x)dx \) is the probability of finding the stochastic variable \( X \) in the range \( [x, x+dx] \).

    +

    From hydrodynamics we know that the friction constant \( \xi \) is given by

    +$$ +\xi =6\pi \eta a/m +$$ + +

    where \( \eta \) is the viscosity of the solvent and a is the radius of the particle .

    + +

    Solving the second equation in the previous slide we get

    +$$ +\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F }(\tau ). +$$
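A quick consistency check, added here for clarity: differentiating under the integral sign gives
$$
\frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}_{0}e^{-\xi t}+\mathbf{F}(t)-\xi \int_{0}^{t}d\tau\, e^{-\xi (t-\tau )}\mathbf{F}(\tau )=-\xi \mathbf{v}(t)+\mathbf{F}(t),
$$
so the expression indeed solves the equation of motion with \( \mathbf{v}(0)=\mathbf{v}_{0} \).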

    @@ -398,13 +311,6 @@

diff --git a/doc/pub/week3/html/._week3-bs045.html b/doc/pub/week3/html/._week3-bs045.html
index fd0286e2..c52e83eb 100644
--- a/doc/pub/week3/html/._week3-bs045.html
+++ b/doc/pub/week3/html/._week3-bs045.html
-Week 5 January 30-February 3: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations

-An arbitrary number of other stochastic variables may be derived from \( X \). For example, any \( Y \) given by a mapping of \( X \) is also a stochastic variable. The mapping may also be time-dependent, that is, depend on an additional variable \( t \),
-$$
- Y_X(t) = f(X, t).
-$$
-The quantity \( Y_X(t) \) is called a random function or, since \( t \) often is time, a stochastic process. A stochastic process is a function of two variables: one is the time, the other is a stochastic variable \( X \). Let \( x \) be one of the possible values of \( X \); then
-$$
- y(t) = f(x, t)
-$$
-is a function of \( t \), called a sample function or realization of the process. In physics one considers the stochastic process to be an ensemble of such sample functions.
+
+If we want to get some useful information out of this, we have to average over all possible realizations of \( \mathbf{F}(t) \), with the initial velocity as a condition. A useful quantity, for example, is
+$$
+\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}
++2\int_{0}^{t}d\tau\, e^{-\xi (2t-\tau)}\mathbf{v}_{0}\cdot \langle \mathbf{F}(\tau )\rangle_{\mathbf{v}_{0}}
++\int_{0}^{t}d\tau ^{\prime }\int_{0}^{t}d\tau\, e^{-\xi (2t-\tau -\tau ^{\prime })}\langle \mathbf{F}(\tau )\cdot \mathbf{F}(\tau ^{\prime })\rangle_{\mathbf{v}_{0}},
+$$
+which follows from squaring the solution for \( \mathbf{v}(t) \).

    @@ -405,14 +311,6 @@

diff --git a/doc/pub/week3/html/._week3-bs046.html b/doc/pub/week3/html/._week3-bs046.html
index ac6a267d..5c259d51 100644
--- a/doc/pub/week3/html/._week3-bs046.html
+++ b/doc/pub/week3/html/._week3-bs046.html
-Week 5 January 30-February 3: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations

-For many physical systems initial distributions of a stochastic variable \( y \) tend to equilibrium distributions: \( w(y, t)\rightarrow w_0(y) \) as \( t\rightarrow\infty \). In equilibrium, detailed balance constrains the transition rates,
-$$
- W(y\rightarrow y')w_0(y) = W(y'\rightarrow y)w_0(y'),
-$$
-where \( W(y\rightarrow y') \) is the probability, per unit time, that the system changes from a state \( |y\rangle \), characterized by the value \( y \) for the stochastic variable \( Y \), to a state \( |y'\rangle \).
-Note that for a system in equilibrium the transition rate \( W(y'\rightarrow y) \) and the reverse \( W(y\rightarrow y') \) may be very different.
+
+In order to continue we have to make some assumptions about the conditional averages of the stochastic forces. In view of the chaotic character of the stochastic forces, the following assumptions seem appropriate:
+$$
+\langle \mathbf{F}(t)\rangle=0,
+$$
+and
+$$
+\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle_{\mathbf{v}_{0}}= C_{\mathbf{v}_{0}}\delta (t-t^{\prime }).
+$$
+We omit the subscript \( \mathbf{v}_{0} \) when the quantity of interest turns out to be independent of \( \mathbf{v}_{0} \). Using the last three equations we get
+$$
+\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}+\frac{C_{\mathbf{v}_{0}}}{2\xi }(1-e^{-2\xi t}).
+$$
+For large \( t \) this should equal the equipartition value \( 3kT/m \), so that \( C_{\mathbf{v}_{0}}=6\xi kT/m \), from which it follows that
+$$
+\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle =6\frac{kT}{m}\xi \delta (t-t^{\prime }).
+$$
+This result is called the fluctuation-dissipation theorem.

    @@ -401,15 +322,6 @@

diff --git a/doc/pub/week3/html/._week3-bs047.html b/doc/pub/week3/html/._week3-bs047.html
index dbf805cf..7c2cdb59 100644
--- a/doc/pub/week3/html/._week3-bs047.html
+++ b/doc/pub/week3/html/._week3-bs047.html
-Week 5 January 30-February 3: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations

-Consider, for instance, a simple system that has only two energy levels, \( \epsilon_0 = 0 \) and \( \epsilon_1 = \Delta E \).
-For a system governed by the Boltzmann distribution we find (the partition function has been taken out)
-$$
- W(0\rightarrow 1)\exp{(-\epsilon_0/kT)} = W(1\rightarrow 0)\exp{(-\epsilon_1/kT)}.
-$$
-We then get
-$$
- \frac{W(0\rightarrow 1)}{W(1\rightarrow 0)}=\exp{(-\Delta E/kT)},
-$$
-which goes to zero when \( T \) tends to zero.
+
+Integrating
+$$
+\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau\, e^{-\xi (t-\tau )}\mathbf{F}(\tau ),
+$$
+we get
+$$
+\mathbf{r}(t)=\mathbf{r}_{0}+\mathbf{v}_{0}\frac{1}{\xi }(1-e^{-\xi t})+
+\int_0^t d\tau \int_0^{\tau}d\tau ^{\prime }\, e^{-\xi (\tau -\tau ^{\prime })}\mathbf{F}(\tau ^{\prime }),
+$$
+from which we calculate the mean square displacement
+$$
+\langle ( \mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle _{\mathbf{v}_{0}}=\frac{v_0^2}{\xi^{2}}(1-e^{-\xi t})^{2}+\frac{3kT}{m\xi ^{2}}(2\xi t-3+4e^{-\xi t}-e^{-2\xi t}).
+$$
+For short times the motion is ballistic, \( \langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle \approx v_0^2t^2 \), while for long times it grows linearly with \( t \), as shown on the next slide.

    @@ -397,16 +312,6 @@

diff --git a/doc/pub/week3/html/._week3-bs048.html b/doc/pub/week3/html/._week3-bs048.html
index ec02c619..77fa06e0 100644
--- a/doc/pub/week3/html/._week3-bs048.html
+++ b/doc/pub/week3/html/._week3-bs048.html
-Week 5 January 30-February 3: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
+Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations

-If we assume a discrete set of events, our initial probability distribution function can be given by
-$$
- w_i(0) = \delta_{i,0},
-$$
-and its time development after a given time step \( \Delta t=\epsilon \) is
-$$
- w_i(t) = \sum_{j}W(j\rightarrow i)w_j(t=0).
-$$
-The continuous analog to \( w_i(0) \) is
-$$
- w(\mathbf{x})\rightarrow \delta(\mathbf{x}),
-$$
-where we now have generalized the one-dimensional position \( x \) to a generic \( d \)-dimensional vector \( \mathbf{x} \). The Kronecker \( \delta \) is replaced by the \( \delta \) distribution function \( \delta(\mathbf{x}) \) at \( t=0 \).
+
+For very large \( t \) this becomes
+$$
+\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =\frac{6kT}{m\xi }t,
+$$
+from which we get the Einstein relation
+$$
+D= \frac{kT}{m\xi },
+$$
+where we have used \( \langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =6Dt \). A small numerical check of this relation follows.
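A hedged numerical check (our own script in arbitrary units, not part of the notes): evolve an ensemble of walkers with the same Euler scheme as above and compare the long-time mean square displacement with \( 6Dt \); agreement improves as \( \xi t \) grows.

import numpy as np

# Compare the simulated mean square displacement with the Einstein result
# <(r - r0)^2> = 6 D t, with D = kT/(m*xi), at a fixed long time t.
rng = np.random.default_rng(42)
xi, kT_over_m = 1.0, 1.0
dt, nsteps, nwalkers = 1.0e-2, 10000, 2000
sigma = np.sqrt(2.0*kT_over_m*xi/dt)    # white-noise amplitude per component

r = np.zeros((nwalkers, 3))             # r0 = 0 for every walker
v = np.zeros((nwalkers, 3))
for step in range(nsteps):
    F = sigma*rng.standard_normal((nwalkers, 3))
    r += v*dt
    v += (-xi*v + F)*dt

t = nsteps*dt
print("measured <(r-r0)^2>:", np.mean(np.sum(r**2, axis=1)))
print("Einstein 6*D*t     :", 6.0*(kT_over_m/xi)*t)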

    @@ -402,18 +307,6 @@

diff --git a/doc/pub/week3/html/week3-bs.html b/doc/pub/week3/html/week3-bs.html
index 59267aee..ba242581 100644
--- a/doc/pub/week3/html/week3-bs.html
+++ b/doc/pub/week3/html/week3-bs.html

Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
-February 6-10
+February 2
-© 1999-2023, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
+© 1999-2024, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license

diff --git a/doc/pub/week3/html/week3-reveal.html b/doc/pub/week3/html/week3-reveal.html
index 87a14748..7edc3666 100644
--- a/doc/pub/week3/html/week3-reveal.html
+++ b/doc/pub/week3/html/week3-reveal.html

Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
-February 6-10
+February 2
-© 1999-2023, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
+© 1999-2024, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license

-Overview of week 5
+Overview of week 5, January 29-February 2

Topics
-• Markov Chain Monte Carlo
+• Markov Chain Monte Carlo and repetition from last week
 • Metropolis-Hastings sampling and Importance Sampling
 • Overview video on the Metropolis algorithm
 • Video of lecture tba
 • Handwritten notes tba
-• See also Lectures from FYS3150/4150 on the Metropolis Algorithm
    -

    Basics of the Metropolis Algorithm

    - -

-The algorithm was invented by Metropolis et al. and is often simply called the Metropolis algorithm. It is a method to sample a normalized probability distribution by a stochastic process. We define \( {\cal P}_i^{(n)} \) to be the probability of finding the system in the state \( i \) at step \( n \). The algorithm is then as follows.

    -
    - -
    -

    The basic of the Metropolis Algorithm

    - - -

    -

    We wish to derive the required properties of \( T \) and \( A \) such that -\( {\cal P}_i^{(n\rightarrow \infty)} \rightarrow p_i \) so that starting -from any distribution, the method converges to the correct distribution. -Note that the description here is for a discrete probability distribution. -Replacing probabilities \( p_i \) with expressions like \( p(x_i)dx_i \) will -take all of these over to the corresponding continuum expressions. -

    -
    - -
    -

    More on the Metropolis

    - -

    The dynamical equation for \( {\cal P}_i^{(n)} \) can be written directly from -the description above. The probability of being in the state \( i \) at step \( n \) -is given by the probability of being in any state \( j \) at the previous step, -and making an accepted transition to \( i \) added to the probability of -being in the state \( i \), making a transition to any state \( j \) and -rejecting the move: -

    -

     
    -$$ -\begin{equation} -\tag{1} -{\cal P}^{(n)}_i = \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} -+{\cal P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right) -\right ] \,. -\end{equation} -$$ -

     
    -

    - -
    -

    Metropolis algorithm, setting it up

    -

    Since the probability of making some transition must be 1, -\( \sum_j T_{i\rightarrow j} = 1 \), and Eq. (1) becomes -

    - -

     
    -$$ -\begin{equation} -{\cal P}^{(n)}_i = {\cal P}^{(n-1)}_i + - \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} --{\cal P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] \,. -\tag{2} -\end{equation} -$$ -

     
    -

    - -
    -

    Metropolis continues

    - -

    For large \( n \) we require that \( {\cal P}^{(n\rightarrow \infty)}_i = p_i \), -the desired probability distribution. Taking this limit, gives the -balance requirement -

    - -

     
    -$$ -\begin{equation} -\sum_j \left [p_jT_{j\rightarrow i} A_{j\rightarrow i}-p_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] = 0, -\tag{3} -\end{equation} -$$ -

     
    -

    - -
    -

    Detailed Balance

    - -

    The balance requirement is very weak. Typically the much stronger detailed -balance requirement is enforced, that is rather than the sum being -set to zero, we set each term separately to zero and use this -to determine the acceptance probabilities. Rearranging, the result is -

    - -

     
    -$$ -\begin{equation} -\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}} -= \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}} \,. -\tag{4} -\end{equation} -$$ -

     
    -

    - -
    -

    More on Detailed Balance

    - -

    The Metropolis choice is to maximize the \( A \) values, that is

    - -

     
    -$$ -\begin{equation} -A_{j \rightarrow i} = \min \left ( 1, -\frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ). -\tag{5} -\end{equation} -$$ -

     
    - -

-Other choices are possible, but they all correspond to multiplying \( A_{i\rightarrow j} \) and \( A_{j\rightarrow i} \) by the same constant smaller than unity. The penalty-function method uses just such a factor to compensate for \( p_i \) that are evaluated stochastically and are therefore noisy.

    - -

    Having chosen the acceptance probabilities, we have guaranteed that -if the \( {\cal P}_i^{(n)} \) has equilibrated, that is if it is equal to \( p_i \), -it will remain equilibrated. Next we need to find the circumstances for -convergence to equilibrium. -

    -
    - -
    -

    Dynamical Equation

    - -

    The dynamical equation can be written as

    - -

     
    -$$ -\begin{equation} -{\cal P}^{(n)}_i = \sum_j M_{ij}{\cal P}^{(n-1)}_j -\tag{6} -\end{equation} -$$ -

     
    - -

    with the matrix \( M \) given by

    - -

     
    -$$ -\begin{equation} -M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k} -\right ] + T_{j\rightarrow i} A_{j\rightarrow i} \,. -\tag{7} -\end{equation} -$$ -

     
    - -

    Summing over \( i \) shows that \( \sum_i M_{ij} = 1 \), and since -\( \sum_k T_{i\rightarrow k} = 1 \), and \( A_{i \rightarrow k} \leq 1 \), the -elements of the matrix satisfy \( M_{ij} \geq 0 \). The matrix \( M \) is therefore -a stochastic matrix. -

    -
    - -
    -

    Interpreting the Metropolis Algorithm

    - -

    The Metropolis method is simply the power method for computing the -right eigenvector of \( M \) with the largest magnitude eigenvalue. -By construction, the correct probability distribution is a right eigenvector -with eigenvalue 1. Therefore, for the Metropolis method to converge -to this result, we must show that \( M \) has only one eigenvalue with this -magnitude, and all other eigenvalues are smaller. -

    - -

    Even a defective matrix has at least one left and right eigenvector for -each eigenvalue. An example of a defective matrix is -

    - -

     
    -$$ -\begin{bmatrix} -0 & 1\\ -0 & 0 \\ -\end{bmatrix}, -$$ -

     
    - -

    with two zero eigenvalues, only one right eigenvector

    - -

     
    -$$ -\begin{bmatrix} -1 \\ -0\\ -\end{bmatrix} -$$ -

     
    - -

    and only one left eigenvector \( (0\ 1) \).

    -
    - -
    -

    Gershgorin bounds and Metropolis

    - -

    The Gershgorin bounds for the eigenvalues can be derived by multiplying on -the left with the eigenvector with the maximum and minimum eigenvalues, -

    - -

     
    -$$ -\begin{align} -\sum_i \psi^{\rm max}_i M_{ij} =& \lambda_{\rm max} \psi^{\rm max}_j -\nonumber\\ -\sum_i \psi^{\rm min}_i M_{ij} =& \lambda_{\rm min} \psi^{\rm min}_j -\tag{8} -\end{align} -$$ -

     
    -

    - -
    -

    Normalizing the Eigenvectors

    - -

    Next we choose the normalization of these eigenvectors so that the -largest element (or one of the equally largest elements) -has value 1. Let's call this element \( k \), and -we can therefore bound the magnitude of the other elements to be less -than or equal to 1. -This leads to the inequalities, using the property that \( M_{ij}\geq 0 \), -

    - -

     
    -$$ -\begin{eqnarray} -\sum_i M_{ik} \leq \lambda_{\rm max} -\nonumber\\ -M_{kk}-\sum_{i \neq k} M_{ik} \geq \lambda_{\rm min} -\end{eqnarray} -$$ -

     
    - -

    where the equality from the maximum -will occur only if the eigenvector takes the value 1 for all values of -\( i \) where \( M_{ik} \neq 0 \), and the equality for the minimum will -occur only if the eigenvector takes the value -1 for all values of \( i\neq k \) -where \( M_{ik} \neq 0 \). -

    -
    - -
    -

    More Metropolis analysis

    - -

    That the maximum eigenvalue is 1 follows immediately from the property -that \( \sum_i M_{ik} = 1 \). Similarly the minimum eigenvalue can be -1, -but only if \( M_{kk} = 0 \) and the magnitude of all the other elements -\( \psi_i^{\rm min} \) of -the eigenvector that multiply nonzero elements \( M_{ik} \) are -1. -

    - -

    Let's first see what the properties of \( M \) must be -to eliminate any -1 eigenvalues. -To have a -1 eigenvalue, the left eigenvector must contain only \( \pm 1 \) -and \( 0 \) values. Taking in turn each \( \pm 1 \) value as the maximum, so that -it corresponds to the index \( k \), the nonzero \( M_{ik} \) values must -correspond to \( i \) index values of the eigenvector which have opposite -sign elements. That is, the \( M \) matrix must break up into sets of -states that always make transitions from set A to set B ... back to set A. -In particular, there can be no rejections of these moves in the cycle -since the -1 eigenvalue requires \( M_{kk}=0 \). To guarantee no eigenvalues -with eigenvalue -1, we simply have to make sure that there are no -cycles among states. Notice that this is generally trivial since such -cycles cannot have any rejections at any stage. An example of such -a cycle is sampling a noninteracting Ising spin. If the transition is -taken to flip the spin, and the energy difference is zero, the Boltzmann -factor will not change and the move will always be accepted. The system -will simply flip from up to down to up to down ad infinitum. Including -a rejection probability or using a heat bath algorithm -immediately fixes the problem. -

    -
    - -
    -

    Final Considerations I

    - -

-Next we need to make sure that there is only one left eigenvector with eigenvalue 1. To get an eigenvalue 1, the left eigenvector must be constructed from only ones and zeroes. It is straightforward to see that a vector made up of ones and zeroes can only be an eigenvector with eigenvalue 1 if the matrix element \( M_{ij} = 0 \) for all cases where \( \psi_i \neq \psi_j \). That is, we can choose an index \( i \) and take \( \psi_i = 1 \). We require all elements \( \psi_j \) where \( M_{ij} \neq 0 \) to also have the value \( 1 \). Continuing, we then require all elements \( \psi_{\ell} \) where \( M_{j\ell} \neq 0 \) to have the value \( 1 \). Only if the matrix \( M \) can be put into block diagonal form can there be more than one choice for the left eigenvector with eigenvalue 1. We therefore require that the transition matrix not be in block diagonal form. This simply means that we must choose the transition probability so that we can get from any allowed state to any other in a series of transitions.

    -
    - -
    -

    Final Considerations II

    - -

    Finally, we note that for a defective matrix, with more eigenvalues -than independent eigenvectors for eigenvalue 1, -the left and right -eigenvectors of eigenvalue 1 would be orthogonal. -Here the left eigenvector is all 1 -except for states that can never be reached, and the right eigenvector -is \( p_i > 0 \) except for states that give zero probability. We already -require that we can reach -all states that contribute to \( p_i \). Therefore the left and right -eigenvectors with eigenvalue 1 do not correspond to a defective sector -of the matrix and they are unique. The Metropolis algorithm therefore -converges exponentially to the desired distribution. -

    -
    - -
    -

    Final Considerations III

    - -

    The requirements for the transition \( T_{i \rightarrow j} \) are

    - -
    -

    Importance Sampling: Overview of what needs to be coded

    @@ -754,7 +411,6 @@

Code example for the interacting case with importance sampling (excerpts of the changes):

 from matplotlib import cm
 from matplotlib.ticker import LinearLocator, FormatStrFormatter
 import sys
-from numba import jit,njit

 # Trial wave function for the 2-electron quantum dot in two dims

 # The Monte Carlo sampling with the Metropolis algo
-# jit decorator tells Numba to compile this function.
-# The argument types will be inferred by Numba when function is called.
-@jit()
 def MonteCarloSampling():

     NumberMCcycles= 100000
+Importance sampling, program elements
+
+The general derivative formula of the Jastrow factor is (the subscript \( C \) stands for Correlation)
+$$
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\partial f_{ik}}{\partial x_k}
++
+\sum_{i=k+1}^{N}\frac{\partial f_{ki}}{\partial x_k}.
+$$
+With our Jastrow factor written in a way which can be reused later,
+$$
+\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}f(r_{ij})\right\}},
+$$
+the gradient needed for the quantum force and local energy is easy to compute. The function \( f(r_{ij}) \) depends on the system under study. In the equations below we keep this general form; a concrete example follows.
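As a concrete illustration (our own sketch; the simple Pade-Jastrow exponent \( f(r)=ar/(1+\beta r) \) and the parameter values are assumptions for the example, not fixed by the notes):

import numpy as np

# Illustrative correlation exponent f(r) = a*r/(1+beta*r) and
# Psi_C = exp(sum_{i<j} f(r_ij)); a and beta are arbitrary here.
a, beta = 1.0, 0.5

def f(r):
    return a*r/(1.0 + beta*r)

def psi_C(pos):
    # pos has shape (N, dim); sum f over all pairs i < j
    s = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            s += f(np.linalg.norm(pos[i] - pos[j]))
    return np.exp(s)

pos = np.random.default_rng(1).random((4, 3))   # four particles in 3D
print(psi_C(pos))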

    +
    +
    + +
    +

    Importance sampling, program elements

    +
    + +

    +

    In the Metropolis/Hasting algorithm, the acceptance ratio determines the probability for a particle to be accepted at a new position. The ratio of the trial wave functions evaluated at the new and current positions is given by (\( OB \) for the onebody part)

    +

     
    +$$ +R \equiv \frac{\Psi_{T}^{new}}{\Psi_{T}^{old}} = +\frac{\Psi_{OB}^{new}}{\Psi_{OB}^{old}}\frac{\Psi_{C}^{new}}{\Psi_{C}^{old}} +$$ +

     
    + +

    Here \( \Psi_{OB} \) is our onebody part (Slater determinant or product of boson single-particle states) while \( \Psi_{C} \) is our correlation function, or Jastrow factor. +We need to optimize the \( \nabla \Psi_T / \Psi_T \) ratio and the second derivative as well, that is +the \( \mathbf{\nabla}^2 \Psi_T/\Psi_T \) ratio. The first is needed when we compute the so-called quantum force in importance sampling. +The second is needed when we compute the kinetic energy term of the local energy. +

    +

     
    +$$ +\frac{\mathbf{\mathbf{\nabla}} \Psi}{\Psi} = \frac{\mathbf{\nabla} (\Psi_{OB} \, \Psi_{C})}{\Psi_{OB} \, \Psi_{C}} = \frac{ \Psi_C \mathbf{\nabla} \Psi_{OB} + \Psi_{OB} \mathbf{\nabla} \Psi_{C}}{\Psi_{OB} \Psi_{C}} = \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla} \Psi_C}{ \Psi_C} +$$ +
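A self-contained sanity check of this splitting (our own sketch: a Gaussian onebody factor with parameter \( \alpha \) and the illustrative \( f \) from above; note that \( \nabla\Psi_T/\Psi_T=\nabla\ln\Psi_T \), which we compare with a finite-difference estimate):

import numpy as np

# Toy trial function: ln Psi_T = -alpha/2 sum_i r_i^2 + sum_{i<j} f(r_ij),
# with f(r) = a*r/(1+beta*r). grad(Psi_T)/Psi_T = grad(ln Psi_T) splits into
# the onebody term -alpha*x_k plus the correlation term.
a, beta, alpha = 1.0, 0.5, 0.8

def f(r):
    return a*r/(1.0 + beta*r)

def log_psi(pos):
    s = -0.5*alpha*np.sum(pos**2)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            s += f(np.linalg.norm(pos[i] - pos[j]))
    return s

pos = np.random.default_rng(3).random((3, 3))
k, h = 1, 1.0e-6

# Analytic gradient for particle k: onebody part plus Jastrow part
grad = -alpha*pos[k].copy()
for i in range(len(pos)):
    if i != k:
        diff = pos[k] - pos[i]
        r = np.linalg.norm(diff)
        grad += a/(1.0 + beta*r)**2 * diff/r    # f'(r_ik)*(x_k - x_i)/r_ik

# Central finite-difference estimate of grad(ln Psi_T)
num = np.zeros(3)
for d in range(3):
    dp = np.zeros_like(pos); dp[k, d] = h
    num[d] = (log_psi(pos + dp) - log_psi(pos - dp))/(2.0*h)
print(grad, num)    # the two should agree closely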

     
    +

    +
    + +
    +

Importance sampling

The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is
$$
 \langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\mathbf{\nabla}_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle},
$$
and the corresponding local quantity, sampled in the Monte Carlo integration, is
$$
\hat{K}_i = -\frac{1}{2}\frac{\mathbf{\nabla}_{i}^{2} \Psi}{\Psi}.
$$

     
    +

    +
    + +
    +

Importance sampling

The second derivative which enters the definition of the local energy is
$$
\frac{\mathbf{\nabla}^2 \Psi}{\Psi}=\frac{\mathbf{\nabla}^2 \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C} + 2 \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}}\cdot\frac{\mathbf{\nabla} \Psi_C}{\Psi_C}.
$$
We discuss here how to calculate these quantities in an optimal way.
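For completeness, this identity follows from applying the product rule twice (a step added here for clarity):
$$
\mathbf{\nabla}^2 (\Psi_{OB}\Psi_C) = \Psi_C\mathbf{\nabla}^2 \Psi_{OB} + 2\,\mathbf{\nabla}\Psi_{OB}\cdot\mathbf{\nabla}\Psi_C + \Psi_{OB}\mathbf{\nabla}^2 \Psi_C,
$$
and dividing through by \( \Psi_{OB}\Psi_C \) reproduces the expression above.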

    +
    +
    + +
    +

Importance sampling

We have defined the correlated function as
$$
\Psi_C=\prod_{i < j}g(r_{ij})=\prod_{i < j}^Ng(r_{ij})= \prod_{i=1}^N\prod_{j=i+1}^Ng(r_{ij}),
$$
with \( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \) in three dimensions, or \( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \) if we work with two-dimensional systems.

In our particular case we have
$$
\Psi_C=\prod_{i < j}g(r_{ij})=\exp{\left\{\sum_{i < j}f(r_{ij})\right\}}.
$$

     
    +

    +
    + +
    +

Importance sampling

The total number of different relative distances \( r_{ij} \) is \( N(N-1)/2 \). In a matrix storage format, the relative distances form a strictly upper triangular matrix

$$
\mathbf{r} \equiv \begin{pmatrix}
0 & r_{1,2} & r_{1,3} & \cdots & r_{1,N} \\
\vdots & 0 & r_{2,3} & \cdots & r_{2,N} \\
\vdots & \vdots & 0 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & r_{N-1,N} \\
0 & 0 & 0 & \cdots & 0
\end{pmatrix}.
$$

This applies to \( \mathbf{g} = \mathbf{g}(r_{ij}) \) as well.

In our algorithm we will move one particle at a time, say the \( k \)th particle. This sampling will be seen to be particularly efficient when we are going to compute a Slater determinant.
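A sketch of this storage scheme (illustrative only): when particle \( k \) moves, only column \( k \) (rows \( i<k \)) and row \( k \) (columns \( j>k \)) change, so the refresh is \( \mathcal{O}(N) \) rather than \( \mathcal{O}(N^2) \).

import numpy as np

def distance_matrix(positions):
    # Strictly upper triangular matrix of relative distances r_ij, i < j
    N = positions.shape[0]
    r = np.zeros((N, N))
    for i in range(N - 1):
        for j in range(i + 1, N):
            r[i, j] = np.linalg.norm(positions[i] - positions[j])
    return r

def update_distances(r, positions, k):
    # After moving particle k, refresh only the affected column and row
    N = positions.shape[0]
    for i in range(k):
        r[i, k] = np.linalg.norm(positions[i] - positions[k])
    for j in range(k + 1, N):
        r[k, j] = np.linalg.norm(positions[k] - positions[j])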
Importance sampling

We have that the ratio between Jastrow factors \( R_C \) is given by

$$
R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} =
\prod_{i=1}^{k-1}\frac{g_{ik}^\mathrm{new}}{g_{ik}^\mathrm{cur}}
\prod_{i=k+1}^{N}\frac{ g_{ki}^\mathrm{new}} {g_{ki}^\mathrm{cur}}.
$$

For the Pade-Jastrow form

$$
R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} =
\frac{\exp{U_{new}}}{\exp{U_{cur}}} = \exp{\Delta U},
$$

where

$$
\Delta U =
\sum_{i=1}^{k-1}\big(f_{ik}^\mathrm{new}-f_{ik}^\mathrm{cur}\big)
+
\sum_{i=k+1}^{N}\big(f_{ki}^\mathrm{new}-f_{ki}^\mathrm{cur}\big).
$$
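In code this is a single pass over the affected row and column of the distance matrix; a sketch, again with a hypothetical pair function f(r):

import numpy as np

def jastrow_ratio(f, r_old, r_new, k):
    # R_C = exp(Delta U) when only particle k has moved;
    # r_old, r_new are strictly upper triangular distance matrices
    N = r_old.shape[0]
    dU = 0.0
    for i in range(k):
        dU += f(r_new[i, k]) - f(r_old[i, k])
    for i in range(k + 1, N):
        dU += f(r_new[k, i]) - f(r_old[k, i])
    return np.exp(dU)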
Importance sampling

One needs to develop a special algorithm
that runs only through the elements of the upper triangular
matrix \( \mathbf{g} \) and has \( k \) as an index.

The expression to be derived in the following is of interest when computing the quantum force and the kinetic energy. It has the form

$$
\frac{\mathbf{\nabla}_i\Psi_C}{\Psi_C} = \frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_i},
$$

for all dimensions and with \( i \) running over all particles.

Importance sampling

For the first derivative only \( N-1 \) terms survive the ratio because the \( g \)-terms that are not differentiated cancel with their corresponding ones in the denominator. Then,

$$
\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
+
\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_k}.
$$

An equivalent equation is obtained for the exponential form after replacing \( g_{ij} \) by \( \exp(f_{ij}) \), yielding

$$
\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{\partial f_{ik}}{\partial x_k}
+
\sum_{i=k+1}^{N}\frac{\partial f_{ki}}{\partial x_k},
$$

with both expressions scaling as \( \mathcal{O}(N) \).

Importance sampling

Using the identity

$$
\frac{\partial}{\partial x_i}g_{ij} = -\frac{\partial}{\partial x_j}g_{ij},
$$

we get expressions where all the derivatives acting on the particle are represented by the second index of \( g \):

$$
\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k}
-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_i},
$$

and for the exponential case

$$
\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{\partial f_{ik}}{\partial x_k}
-\sum_{i=k+1}^{N}\frac{\partial f_{ki}}{\partial x_i}.
$$
Importance sampling

For correlation forms depending only on the scalar distances \( r_{ij} \) we can use the chain rule. Noting that

$$
\frac{\partial g_{ij}}{\partial x_j} = \frac{\partial g_{ij}}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_j} = \frac{x_j - x_i}{r_{ij}} \frac{\partial g_{ij}}{\partial r_{ij}},
$$

we arrive at

$$
\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{1}{g_{ik}} \frac{\mathbf{r}_{ik}}{r_{ik}} \frac{\partial g_{ik}}{\partial r_{ik}}
-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\mathbf{r}_{ki}}{r_{ki}}\frac{\partial g_{ki}}{\partial r_{ki}}.
$$
Importance sampling

Note that for the Pade-Jastrow form we can set \( g_{ij} \equiv g(r_{ij}) = e^{f(r_{ij})} = e^{f_{ij}} \) and

$$
\frac{\partial g_{ij}}{\partial r_{ij}} = g_{ij} \frac{\partial f_{ij}}{\partial r_{ij}}.
$$

Therefore,

$$
\frac{1}{\Psi_{C}}\frac{\partial \Psi_{C}}{\partial x_k} =
\sum_{i=1}^{k-1}\frac{\mathbf{r}_{ik}}{r_{ik}}\frac{\partial f_{ik}}{\partial r_{ik}}
-\sum_{i=k+1}^{N}\frac{\mathbf{r}_{ki}}{r_{ki}}\frac{\partial f_{ki}}{\partial r_{ki}},
$$

where

$$
\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i = (x_j - x_i)\mathbf{e}_1 + (y_j - y_i)\mathbf{e}_2 + (z_j - z_i)\mathbf{e}_3
$$

is the relative distance vector.
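Since \( \mathbf{r}_{ik}/r_{ik} = -\mathbf{r}_{ki}/r_{ki} = (\mathbf{r}_k-\mathbf{r}_i)/r_{ki} \), both sums collapse into a single loop over \( i \neq k \). A sketch, assuming a hypothetical callable fprime(r) for \( df/dr \):

import numpy as np

def jastrow_gradient_k(fprime, positions, k):
    # (1/Psi_C) * gradient of Psi_C with respect to particle k:
    # sum over i != k of (r_k - r_i)/r_ki * f'(r_ki)
    grad = np.zeros(positions.shape[1])
    for i in range(positions.shape[0]):
        if i == k:
            continue
        rvec = positions[k] - positions[i]
        r = np.linalg.norm(rvec)
        grad += (rvec / r) * fprime(r)
    return grad

This vector, added to the corresponding one-body term, is the correlation contribution to the quantum force used in the importance-sampled moves.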
Importance sampling

The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is, for the exponential form and one spatial component \( x \),

$$
\left[\frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C}\right]_x =
2\sum_{k=1}^{N}\sum_{i=1}^{k-1}\frac{\partial^2 f_{ik}}{\partial x_k^2}
+
\sum_{k=1}^N
\left(
\sum_{i=1}^{k-1}\frac{\partial f_{ik}}{\partial x_k} -
\sum_{i=k+1}^{N}\frac{\partial f_{ki}}{\partial x_i}
\right)^2
$$
Importance sampling

But we have a simple form for the function, namely

$$
\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})},
$$

and it is easy to see that for particle \( k \)
we have

$$
\frac{\mathbf{\nabla}^2_k \Psi_C}{\Psi_C }=
\sum_{ij\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)\cdot(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})+
\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right)
$$
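The double sum is simply the squared norm of the gradient computed above, which gives an \( \mathcal{O}(N) \) implementation per particle. A sketch for the three-dimensional case (the \( 2/r \) term assumes \( d=3 \)), with hypothetical callables fprime and fpp for \( f' \) and \( f'' \):

import numpy as np

def jastrow_laplacian_k(fprime, fpp, positions, k):
    # (nabla_k^2 Psi_C)/Psi_C for Psi_C = exp(sum_{i<j} f(r_ij)) in 3D;
    # grad@grad reproduces the double sum over i,j != k
    grad = np.zeros(positions.shape[1])
    lap = 0.0
    for j in range(positions.shape[0]):
        if j == k:
            continue
        rvec = positions[k] - positions[j]
        r = np.linalg.norm(rvec)
        grad += (rvec / r) * fprime(r)
        lap += fpp(r) + 2.0 / r * fprime(r)
    return lap + grad @ grad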
Importance sampling, Fokker-Planck and Langevin equations

A stochastic process is simply a function of two variables, one being the time,
the other a stochastic variable \( X \), defined by specifying

• the set \( \left\{x\right\} \) of possible values for \( X \);
• the probability distribution \( w_X(x) \) over this set, or briefly \( w(x) \).

The set of values \( \left\{x\right\} \) for \( X \)
may be discrete or continuous. If the set of
values is continuous, then \( w_X(x) \) is a probability density, so that
\( w_X(x)dx \)
is the probability that one finds the stochastic variable \( X \) to have values
in the range \( [x, x + dx] \).

Importance sampling, Fokker-Planck and Langevin equations

An arbitrary number of other stochastic variables may be derived from
\( X \). For example, any \( Y \) given by a mapping of \( X \) is also a stochastic
variable. The mapping may also be time-dependent, that is, the mapping
depends on an additional variable \( t \),

$$
Y_X (t) = f (X, t).
$$

The quantity \( Y_X (t) \) is called a random function, or, since \( t \) often is time,
a stochastic process. Let \( x \) be one of the
possible values of \( X \); then

$$
y(t) = f (x, t)
$$

is a function of \( t \), called a sample function or realization of the process.
In physics one considers the stochastic process to be an ensemble of such
sample functions.

Importance sampling, Fokker-Planck and Langevin equations

For many physical systems initial distributions of a stochastic
variable \( y \) tend to equilibrium distributions: \( w(y, t)\rightarrow w_0(y) \)
as \( t\rightarrow\infty \). In
equilibrium detailed balance constrains the transition rates,

$$
W(y\rightarrow y')w_0(y) = W(y'\rightarrow y)w_0(y'),
$$

where \( W(y\rightarrow y') \)
is the probability, per unit time, that the system changes
from a state \( |y\rangle \), characterized by the value \( y \)
of the stochastic variable \( Y \), to a state \( |y'\rangle \).

Note that for a system in equilibrium the transition rate
\( W(y'\rightarrow y) \) and
the reverse \( W(y\rightarrow y') \) may be very different.
Importance sampling, Fokker-Planck and Langevin equations

Consider, for instance, a simple
system that has only two energy levels \( \epsilon_0 = 0 \) and
\( \epsilon_1 = \Delta E \).

For a system governed by the Boltzmann distribution we find (the partition function has been taken out)

$$
W(0\rightarrow 1)\exp{(-\epsilon_0/kT)} = W(1\rightarrow 0)\exp{(-\epsilon_1/kT)}.
$$

We then get

$$
\frac{W(0\rightarrow 1)}{W(1 \rightarrow 0)}=\exp{(-\Delta E/kT)},
$$

which goes to zero when \( T \) tends to zero.
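A quick numerical illustration of how the upward transitions freeze out (pure arithmetic, in units where Boltzmann's constant \( k=1 \)):

import numpy as np

deltaE = 1.0  # energy gap in units where k = 1
for kT in (2.0, 1.0, 0.5, 0.1, 0.01):
    # ratio of upward to downward transition rates, W(0->1)/W(1->0)
    print(f"kT = {kT:5.2f}   W(0->1)/W(1->0) = {np.exp(-deltaE / kT):.3e}")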
Importance sampling, Fokker-Planck and Langevin equations

If we assume a discrete set of events,
our initial probability
distribution function can be given by

$$
w_i(0) = \delta_{i,0},
$$

and its time development after a given time step \( \Delta t=\epsilon \) is

$$
w_i(t) = \sum_{j}W(j\rightarrow i)w_j(t=0).
$$

The continuous analog to \( w_i(0) \) is

$$
w(\mathbf{x})\rightarrow \delta(\mathbf{x}),
$$

where we now have generalized the one-dimensional position \( x \) to a position vector \( \mathbf{x} \) of arbitrary dimension. The Kronecker \( \delta \) function is replaced by the \( \delta \) distribution
\( \delta(\mathbf{x}) \) at \( t=0 \).

Importance sampling, Fokker-Planck and Langevin equations

The transition from a state \( j \) to a state \( i \) is now replaced by a transition
to a state with position \( \mathbf{y} \) from a state with position \( \mathbf{x} \).
The discrete sum of transition probabilities can then be replaced by an integral
and we obtain the new distribution at a time \( t+\Delta t \) as

$$
w(\mathbf{y},t+\Delta t)= \int W(\mathbf{y},t+\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x},
$$

and after \( m \) time steps we have

$$
w(\mathbf{y},t+m\Delta t)= \int W(\mathbf{y},t+m\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}.
$$

When equilibrium is reached we have

$$
w(\mathbf{y})= \int W(\mathbf{y}|\mathbf{x}, t)w(\mathbf{x})d\mathbf{x},
$$

that is, there is no time dependence. Note our change of notation for \( W \): it now denotes the conditional probability of \( \mathbf{y} \) given \( \mathbf{x} \).

Importance sampling, Fokker-Planck and Langevin equations

We can solve the equation for \( w(\mathbf{y},t) \) by making a Fourier transform to
momentum space.
The PDF \( w(\mathbf{x},t) \) is related to its Fourier transform
\( \tilde{w}(\mathbf{k},t) \) through

$$
w(\mathbf{x},t) = \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})}\tilde{w}(\mathbf{k},t),
$$

and using the definition of the
\( \delta \)-function,

$$
\delta(\mathbf{x}) = \frac{1}{2\pi} \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})},
$$

we see that

$$
\tilde{w}(\mathbf{k},0)=1/2\pi.
$$
Importance sampling, Fokker-Planck and Langevin equations

We can then use the Fourier-transformed diffusion equation

$$
\frac{\partial \tilde{w}(\mathbf{k},t)}{\partial t} = -D\mathbf{k}^2\tilde{w}(\mathbf{k},t),
$$

with the obvious solution

$$
\tilde{w}(\mathbf{k},t)=\tilde{w}(\mathbf{k},0)\exp{\left[-(D\mathbf{k}^2t)\right]}=
\frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}.
$$
Importance sampling, Fokker-Planck and Langevin equations

Taking the inverse Fourier transform we obtain

$$
w(\mathbf{x},t)=\int_{-\infty}^{\infty}d\mathbf{k} \exp{\left[i\mathbf{kx}\right]}\frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}=
\frac{1}{\sqrt{4\pi Dt}}\exp{\left[-(\mathbf{x}^2/4Dt)\right]},
$$

with the normalization condition

$$
\int_{-\infty}^{\infty}w(\mathbf{x},t)d\mathbf{x}=1.
$$
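This Gaussian has variance \( 2Dt \), which is easy to verify against a direct random-walk simulation (a sketch; the parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(42)
D, dt, nsteps, nwalkers = 0.5, 1.0e-3, 1000, 100_000
# each elementary step is drawn from the short-time kernel N(0, 2 D dt)
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(nwalkers, nsteps))
x = steps.sum(axis=1)            # walker positions at time t = nsteps*dt
t = nsteps * dt
print("sample variance:", x.var(), "  expected 2Dt:", 2.0 * D * t)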
Importance sampling, Fokker-Planck and Langevin equations

The solution represents the probability of finding
our random walker at position \( \mathbf{x} \) at time \( t \) if the initial distribution
was placed at \( \mathbf{x}=0 \) at \( t=0 \).

There is another interesting feature worth observing. The discrete transition probability \( W \)
itself is given by a binomial distribution.
The central limit theorem then states that the
transition probability converges to the normal
distribution in the limit \( n\rightarrow \infty \). It is then possible to show that

$$
W(il-jl,n\epsilon)\rightarrow W(\mathbf{y},t+\Delta t|\mathbf{x},t)=
\frac{1}{\sqrt{4\pi D\Delta t}}\exp{\left[-((\mathbf{y}-\mathbf{x})^2/4D\Delta t)\right]},
$$

and that it satisfies the normalization condition and is itself a solution
to the diffusion equation.

Importance sampling, Fokker-Planck and Langevin equations

Let us now assume that we have three PDFs for times \( t_0 < t' < t \), that is
\( w(\mathbf{x}_0,t_0) \), \( w(\mathbf{x}',t') \) and \( w(\mathbf{x},t) \).
We have then

$$
w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')w(\mathbf{x}',t')d\mathbf{x}',
$$

and

$$
w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0,
$$

and

$$
w(\mathbf{x}',t')= \int_{-\infty}^{\infty} W(\mathbf{x}',t'|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0.
$$

Importance sampling, Fokker-Planck and Langevin equations

We can combine these equations and arrive at the famous Einstein-Smoluchowski-Kolmogorov-Chapman (ESKC) relation

$$
W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'.
$$

We can replace the spatial dependence with a dependence upon, say, the velocity
(or momentum), that is we have

$$
W(\mathbf{v},t|\mathbf{v}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{v},t|\mathbf{v}',t')W(\mathbf{v}',t'|\mathbf{v}_0,t_0)d\mathbf{v}'.
$$
Importance sampling, Fokker-Planck and Langevin equations

We will now derive the Fokker-Planck equation.
We start from the ESKC equation

$$
W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'.
$$

Define \( s=t'-t_0 \), \( \tau=t-t' \) and \( t-t_0=s+\tau \). We have then

$$
W(\mathbf{x},s+\tau|\mathbf{x}_0) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}')W(\mathbf{x}',s|\mathbf{x}_0)d\mathbf{x}'.
$$

Importance sampling, Fokker-Planck and Langevin equations

Assume now that \( \tau \) is very small so that we can make an expansion in terms of a small step \( \xi \), with \( \mathbf{x}'=\mathbf{x}-\xi \), that is

$$
W(\mathbf{x},s|\mathbf{x}_0)+\frac{\partial W}{\partial s}\tau +O(\tau^2) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0)d\mathbf{x}'.
$$

We assume that \( W(\mathbf{x},\tau|\mathbf{x}-\xi) \) takes non-negligible values only when \( \xi \) is small. This is just another way of stating the Master equation!

Importance sampling, Fokker-Planck and Langevin equations

We say thus that \( \mathbf{x} \) changes only by a small amount in the time interval \( \tau \).
This means that we can make a Taylor expansion in terms of \( \xi \), that is we
expand

$$
W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0) =
\sum_{n=0}^{\infty}\frac{(-\xi)^n}{n!}\frac{\partial^n}{\partial x^n}\left[W(\mathbf{x}+\xi,\tau|\mathbf{x})W(\mathbf{x},s|\mathbf{x}_0)
\right].
$$

Importance sampling, Fokker-Planck and Langevin equations

We can then rewrite the ESKC equation as

$$
\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}\tau=-W(\mathbf{x},s|\mathbf{x}_0)+
\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}
\left[W(\mathbf{x},s|\mathbf{x}_0)\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi\right].
$$

We have neglected higher powers of \( \tau \) and have used that for \( n=0 \)
we get simply \( W(\mathbf{x},s|\mathbf{x}_0) \) due to normalization.
Importance sampling, Fokker-Planck and Langevin equations

We simplify the above by introducing the moments

$$
M_n=\frac{1}{\tau}\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi=
\frac{\langle [\Delta x(\tau)]^n\rangle}{\tau},
$$

resulting in

$$
\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}=
\sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n}
\left[W(\mathbf{x},s|\mathbf{x}_0)M_n\right].
$$
Importance sampling, Fokker-Planck and Langevin equations

When \( \tau \rightarrow 0 \) we assume that \( \langle [\Delta x(\tau)]^n\rangle \rightarrow 0 \) more rapidly than \( \tau \) itself if \( n > 2 \).
When \( \tau \) is much larger than the standard correlation time of the
system, then \( M_n \) for \( n > 2 \) can normally be neglected.
This means that fluctuations become negligible at large time scales.

If we neglect such terms we can rewrite the ESKC equation as

$$
\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}=
-\frac{\partial M_1W(\mathbf{x},s|\mathbf{x}_0)}{\partial x}+
\frac{1}{2}\frac{\partial^2 M_2W(\mathbf{x},s|\mathbf{x}_0)}{\partial x^2}.
$$

Importance sampling, Fokker-Planck and Langevin equations

In a more compact form we have

$$
\frac{\partial W}{\partial s}=
-\frac{\partial M_1W}{\partial x}+
\frac{1}{2}\frac{\partial^2 M_2W}{\partial x^2},
$$

which is the Fokker-Planck equation! It is trivial to replace
position with velocity (momentum).
Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

Consider a particle suspended in a liquid. On its path through the liquid it will continuously collide with the liquid molecules. Because on average the particle will collide more often on the front side than on the back side, it will experience a systematic force proportional to its velocity, and directed opposite to its velocity. Besides this systematic force the particle will experience a stochastic force \( \mathbf{F}(t) \).
The equations of motion are

• \( \frac{d\mathbf{r}}{dt}=\mathbf{v} \) and
• \( \frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}+\mathbf{F} \).
Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

From hydrodynamics we know that the friction constant \( \xi \) is given by

$$
\xi =6\pi \eta a/m,
$$

where \( \eta \) is the viscosity of the solvent and \( a \) is the radius of the particle.

Solving the second equation in the previous slide we get

$$
\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F}(\tau ).
$$
Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

If we want to get some useful information out of this, we have to average over all possible realizations of
\( \mathbf{F}(t) \), with the initial velocity as a condition. A useful quantity is, for example,

$$
\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}
+2\int_{0}^{t}d\tau e^{-\xi (2t-\tau)}\mathbf{v}_{0}\cdot \langle \mathbf{F}(\tau )\rangle_{\mathbf{v}_{0}}
+\int_{0}^{t}d\tau ^{\prime }\int_{0}^{t}d\tau e^{-\xi (2t-\tau -\tau ^{\prime })}
\langle \mathbf{F}(\tau )\cdot \mathbf{F}(\tau ^{\prime })\rangle_{ \mathbf{v}_{0}}.
$$
Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

In order to continue we have to make some assumptions about the conditional averages of the stochastic forces.
In view of the chaotic character of the stochastic forces the following
assumptions seem appropriate:

$$
\langle \mathbf{F}(t)\rangle=0,
$$

and

$$
\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle_{\mathbf{v}_{0}}= C_{\mathbf{v}_{0}}\delta (t-t^{\prime }).
$$

We omit the subscript \( \mathbf{v}_{0} \) when the quantity of interest turns out to be independent of \( \mathbf{v}_{0} \). Using the last three equations we get

$$
\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}+\frac{C_{\mathbf{v}_{0}}}{2\xi }(1-e^{-2\xi t}).
$$

For large \( t \) this should equal \( 3kT/m \), from which it follows that

$$
\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle =6\frac{kT}{m}\xi \delta (t-t^{\prime }).
$$

This result is called the fluctuation-dissipation theorem.
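A sketch that checks this numerically with an Euler-Maruyama integration of the velocity Langevin equation in reduced units; the per-component, per-step noise variance \( 2(kT/m)\xi\,dt \) follows from the fluctuation-dissipation theorem above, and the stationary average should approach \( 3kT/m \):

import numpy as np

rng = np.random.default_rng(1)
xi, kT_over_m = 1.0, 1.0                    # friction constant and kT/m
dt, nsteps, nwalkers = 1.0e-3, 20_000, 2_000
v = np.zeros((nwalkers, 3))                 # all walkers start from v0 = 0
sigma = np.sqrt(2.0 * kT_over_m * xi * dt)  # noise amplitude per component
for _ in range(nsteps):
    v += -xi * v * dt + sigma * rng.normal(size=v.shape)
print("<v.v> =", (v * v).sum(axis=1).mean(),
      "  expected 3kT/m =", 3.0 * kT_over_m)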
Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

Integrating

$$
\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F}(\tau ),
$$

we get

$$
\mathbf{r}(t)=\mathbf{r}_{0}+\mathbf{v}_{0}\frac{1}{\xi }(1-e^{-\xi t})+
\int_0^td\tau \int_0^{\tau}d\tau ^{\prime } e^{-\xi (\tau -\tau ^{\prime })}\mathbf{F}(\tau ^{\prime }),
$$

from which we calculate the mean square displacement

$$
\langle ( \mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle _{\mathbf{v}_{0}}=\frac{v_0^2}{\xi^2}(1-e^{-\xi t})^{2}+\frac{3kT}{m\xi ^{2}}(2\xi t-3+4e^{-\xi t}-e^{-2\xi t}).
$$

Importance sampling, Fokker-Planck and Langevin equations

Langevin equation

For very large \( t \) this becomes

$$
\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =\frac{6kT}{m\xi }t,
$$

from which we get the Einstein relation

$$
D= \frac{kT}{m\xi },
$$

where we have used \( \langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =6Dt \).
    Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importa
    -

    February 6-10

    +

    February 2












    -

    Overview of week 5

    +

    Overview of week 5, January 29-February 2

    Topics

    @@ -169,319 +267,10 @@

    Overview of week 5

  • Overview video on Metropolis algoritm
  • Video of lecture tba
  • Handwritten notes tba
  • -
  • See also Lectures from FYS3150/4150 on the Metropolis Algorithm
  • -









    -

    Basics of the Metropolis Algorithm

    - -

    The Metropolis et al. -algorithm was invented by Metropolis et. a -and is often simply called the Metropolis algorithm. -It is a method to sample a normalized probability -distribution by a stochastic process. We define \( {\cal P}_i^{(n)} \) to -be the probability for finding the system in the state \( i \) at step \( n \). -The algorithm is then -

    - -









    -

    The basic of the Metropolis Algorithm

    - - -

    We wish to derive the required properties of \( T \) and \( A \) such that -\( {\cal P}_i^{(n\rightarrow \infty)} \rightarrow p_i \) so that starting -from any distribution, the method converges to the correct distribution. -Note that the description here is for a discrete probability distribution. -Replacing probabilities \( p_i \) with expressions like \( p(x_i)dx_i \) will -take all of these over to the corresponding continuum expressions. -

    - -









    -

    More on the Metropolis

    - -

    The dynamical equation for \( {\cal P}_i^{(n)} \) can be written directly from -the description above. The probability of being in the state \( i \) at step \( n \) -is given by the probability of being in any state \( j \) at the previous step, -and making an accepted transition to \( i \) added to the probability of -being in the state \( i \), making a transition to any state \( j \) and -rejecting the move: -

    -$$ -\begin{equation} -\label{eq:eq1} -{\cal P}^{(n)}_i = \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} -+{\cal P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right) -\right ] \,. -\end{equation} -$$ - - -









    -

    Metropolis algorithm, setting it up

    -

    Since the probability of making some transition must be 1, -\( \sum_j T_{i\rightarrow j} = 1 \), and Eq. \eqref{eq:eq1} becomes -

    - -$$ -\begin{equation} -{\cal P}^{(n)}_i = {\cal P}^{(n-1)}_i + - \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} --{\cal P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] \,. -\label{_auto1} -\end{equation} -$$ - - -









    -

    Metropolis continues

    - -

    For large \( n \) we require that \( {\cal P}^{(n\rightarrow \infty)}_i = p_i \), -the desired probability distribution. Taking this limit, gives the -balance requirement -

    - -$$ -\begin{equation} -\sum_j \left [p_jT_{j\rightarrow i} A_{j\rightarrow i}-p_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] = 0, -\label{_auto2} -\end{equation} -$$ - - -









    -

    Detailed Balance

    - -

    The balance requirement is very weak. Typically the much stronger detailed -balance requirement is enforced, that is rather than the sum being -set to zero, we set each term separately to zero and use this -to determine the acceptance probabilities. Rearranging, the result is -

    - -$$ -\begin{equation} -\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}} -= \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}} \,. -\label{_auto3} -\end{equation} -$$ - - -









    -

    More on Detailed Balance

    - -

    The Metropolis choice is to maximize the \( A \) values, that is

    - -$$ -\begin{equation} -A_{j \rightarrow i} = \min \left ( 1, -\frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ). -\label{_auto4} -\end{equation} -$$ - -

    Other choices are possible, but they all correspond to multilplying -\( A_{i\rightarrow j} \) and \( A_{j\rightarrow i} \) by the same constant -smaller than unity. The penalty function method uses just such -a factor to compensate for \( p_i \) that are evaluated stochastically -and are therefore noisy. -

    - -

    Having chosen the acceptance probabilities, we have guaranteed that -if the \( {\cal P}_i^{(n)} \) has equilibrated, that is if it is equal to \( p_i \), -it will remain equilibrated. Next we need to find the circumstances for -convergence to equilibrium. -

    - -









    -

    Dynamical Equation

    - -

    The dynamical equation can be written as

    - -$$ -\begin{equation} -{\cal P}^{(n)}_i = \sum_j M_{ij}{\cal P}^{(n-1)}_j -\label{_auto5} -\end{equation} -$$ - -

    with the matrix \( M \) given by

    - -$$ -\begin{equation} -M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k} -\right ] + T_{j\rightarrow i} A_{j\rightarrow i} \,. -\label{_auto6} -\end{equation} -$$ - -

    Summing over \( i \) shows that \( \sum_i M_{ij} = 1 \), and since -\( \sum_k T_{i\rightarrow k} = 1 \), and \( A_{i \rightarrow k} \leq 1 \), the -elements of the matrix satisfy \( M_{ij} \geq 0 \). The matrix \( M \) is therefore -a stochastic matrix. -

    - -









    -

    Interpreting the Metropolis Algorithm

    - -

    The Metropolis method is simply the power method for computing the -right eigenvector of \( M \) with the largest magnitude eigenvalue. -By construction, the correct probability distribution is a right eigenvector -with eigenvalue 1. Therefore, for the Metropolis method to converge -to this result, we must show that \( M \) has only one eigenvalue with this -magnitude, and all other eigenvalues are smaller. -

    - -

    Even a defective matrix has at least one left and right eigenvector for -each eigenvalue. An example of a defective matrix is -

    - -$$ -\begin{bmatrix} -0 & 1\\ -0 & 0 \\ -\end{bmatrix}, -$$ - -

    with two zero eigenvalues, only one right eigenvector

    - -$$ -\begin{bmatrix} -1 \\ -0\\ -\end{bmatrix} -$$ - -

    and only one left eigenvector \( (0\ 1) \).

    - -









    -

    Gershgorin bounds and Metropolis

    - -

    The Gershgorin bounds for the eigenvalues can be derived by multiplying on -the left with the eigenvector with the maximum and minimum eigenvalues, -

    - -$$ -\begin{align} -\sum_i \psi^{\rm max}_i M_{ij} =& \lambda_{\rm max} \psi^{\rm max}_j -\nonumber\\ -\sum_i \psi^{\rm min}_i M_{ij} =& \lambda_{\rm min} \psi^{\rm min}_j -\label{_auto7} -\end{align} -$$ - - -









    -

    Normalizing the Eigenvectors

    - -

    Next we choose the normalization of these eigenvectors so that the -largest element (or one of the equally largest elements) -has value 1. Let's call this element \( k \), and -we can therefore bound the magnitude of the other elements to be less -than or equal to 1. -This leads to the inequalities, using the property that \( M_{ij}\geq 0 \), -

    - -$$ -\begin{eqnarray} -\sum_i M_{ik} \leq \lambda_{\rm max} -\nonumber\\ -M_{kk}-\sum_{i \neq k} M_{ik} \geq \lambda_{\rm min} -\end{eqnarray} -$$ - -

    where the equality from the maximum -will occur only if the eigenvector takes the value 1 for all values of -\( i \) where \( M_{ik} \neq 0 \), and the equality for the minimum will -occur only if the eigenvector takes the value -1 for all values of \( i\neq k \) -where \( M_{ik} \neq 0 \). -

    - -









    -

    More Metropolis analysis

    - -

    That the maximum eigenvalue is 1 follows immediately from the property -that \( \sum_i M_{ik} = 1 \). Similarly the minimum eigenvalue can be -1, -but only if \( M_{kk} = 0 \) and the magnitude of all the other elements -\( \psi_i^{\rm min} \) of -the eigenvector that multiply nonzero elements \( M_{ik} \) are -1. -

    - -

    Let's first see what the properties of \( M \) must be -to eliminate any -1 eigenvalues. -To have a -1 eigenvalue, the left eigenvector must contain only \( \pm 1 \) -and \( 0 \) values. Taking in turn each \( \pm 1 \) value as the maximum, so that -it corresponds to the index \( k \), the nonzero \( M_{ik} \) values must -correspond to \( i \) index values of the eigenvector which have opposite -sign elements. That is, the \( M \) matrix must break up into sets of -states that always make transitions from set A to set B ... back to set A. -In particular, there can be no rejections of these moves in the cycle -since the -1 eigenvalue requires \( M_{kk}=0 \). To guarantee no eigenvalues -with eigenvalue -1, we simply have to make sure that there are no -cycles among states. Notice that this is generally trivial since such -cycles cannot have any rejections at any stage. An example of such -a cycle is sampling a noninteracting Ising spin. If the transition is -taken to flip the spin, and the energy difference is zero, the Boltzmann -factor will not change and the move will always be accepted. The system -will simply flip from up to down to up to down ad infinitum. Including -a rejection probability or using a heat bath algorithm -immediately fixes the problem. -

    - -









    -

    Final Considerations I

    - -

    Next we need to make sure that there is only one left eigenvector -with eigenvalue 1. To get an eigenvalue 1, the left eigenvector must be -constructed from only ones and zeroes. It is straightforward to -see that a vector made up of -ones and zeroes can only be an eigenvector with eigenvalue 1 if the -matrix element \( M_{ij} = 0 \) for all cases where \( \psi_i \neq \psi_j \). -That is we can choose an index \( i \) and take \( \psi_i = 1 \). -We require all elements \( \psi_j \) where \( M_{ij} \neq 0 \) to also have -the value \( 1 \). Continuing we then require all elements \( \psi_\ell \) $M_{j\ell}$ -to have value \( 1 \). Only if the matrix \( M \) can be put into block diagonal -form can there be more than one choice for the left eigenvector with -eigenvalue 1. We therefore require that the transition matrix not -be in block diagonal form. This simply means that we must choose -the transition probability so that we can get from any allowed state -to any other in a series of transitions. -

    - -









    -

    Final Considerations II

    - -

    Finally, we note that for a defective matrix, with more eigenvalues -than independent eigenvectors for eigenvalue 1, -the left and right -eigenvectors of eigenvalue 1 would be orthogonal. -Here the left eigenvector is all 1 -except for states that can never be reached, and the right eigenvector -is \( p_i > 0 \) except for states that give zero probability. We already -require that we can reach -all states that contribute to \( p_i \). Therefore the left and right -eigenvectors with eigenvalue 1 do not correspond to a defective sector -of the matrix and they are unique. The Metropolis algorithm therefore -converges exponentially to the desired distribution. -

    - -









    -

    Final Considerations III

    - -

    The requirements for the transition \( T_{i \rightarrow j} \) are

    -









    Importance Sampling: Overview of what needs to be coded

    @@ -656,7 +445,6 @@

    Code exa from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import sys -from numba import jit,njit # Trial wave function for the 2-electron quantum dot in two dims @@ -711,9 +499,6 @@

    Code exa
    # The Monte Carlo sampling with the Metropolis algo
    -# jit decorator tells Numba to compile this function.
    -# The argument types will be inferred by Numba when function is called.
    -@jit()
     def MonteCarloSampling():
     
         NumberMCcycles= 100000
    @@ -848,9 +633,897 @@ 

    Code exa

    +









    +

    Importance sampling, program elements

    +
    + +

    +

    The general derivative formula of the Jastrow factor is (the subscript \( C \) stands for Correlation)

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k} +$$ + +

    However, +with our written in way which can be reused later as +

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}f(r_{ij})\right\}}, +$$ + +

    the gradient needed for the quantum force and local energy is easy to compute. +The function \( f(r_{ij}) \) will depends on the system under study. In the equations below we will keep this general form. +

    +
    + + +









    +

    Importance sampling, program elements

    +
    + +

    +

    In the Metropolis/Hasting algorithm, the acceptance ratio determines the probability for a particle to be accepted at a new position. The ratio of the trial wave functions evaluated at the new and current positions is given by (\( OB \) for the onebody part)

    +$$ +R \equiv \frac{\Psi_{T}^{new}}{\Psi_{T}^{old}} = +\frac{\Psi_{OB}^{new}}{\Psi_{OB}^{old}}\frac{\Psi_{C}^{new}}{\Psi_{C}^{old}} +$$ + +

    Here \( \Psi_{OB} \) is our onebody part (Slater determinant or product of boson single-particle states) while \( \Psi_{C} \) is our correlation function, or Jastrow factor. +We need to optimize the \( \nabla \Psi_T / \Psi_T \) ratio and the second derivative as well, that is +the \( \mathbf{\nabla}^2 \Psi_T/\Psi_T \) ratio. The first is needed when we compute the so-called quantum force in importance sampling. +The second is needed when we compute the kinetic energy term of the local energy. +

    +$$ +\frac{\mathbf{\mathbf{\nabla}} \Psi}{\Psi} = \frac{\mathbf{\nabla} (\Psi_{OB} \, \Psi_{C})}{\Psi_{OB} \, \Psi_{C}} = \frac{ \Psi_C \mathbf{\nabla} \Psi_{OB} + \Psi_{OB} \mathbf{\nabla} \Psi_{C}}{\Psi_{OB} \Psi_{C}} = \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla} \Psi_C}{ \Psi_C} +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is

    +$$ + \langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\mathbf{\nabla}_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle}, +$$ + +$$ +\hat{K}_i = -\frac{1}{2}\frac{\mathbf{\nabla}_{i}^{2} \Psi}{\Psi}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The second derivative which enters the definition of the local energy is

    +$$ +\frac{\mathbf{\nabla}^2 \Psi}{\Psi}=\frac{\mathbf{\nabla}^2 \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla}^2 \Psi_C}{ \Psi_C} + 2 \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}}\cdot\frac{\mathbf{\nabla} \Psi_C}{ \Psi_C} +$$ + +

    We discuss here how to calculate these quantities in an optimal way,

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    We have defined the correlated function as

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})=\prod_{i < j}^Ng(r_{ij})= \prod_{i=1}^N\prod_{j=i+1}^Ng(r_{ij}), +$$ + +

    with +\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \) in three dimensions or +\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \) if we work with two-dimensional systems. +

    + +

    In our particular case we have

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})=\exp{\left\{\sum_{i < j}f(r_{ij})\right\}}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The total number of different relative distances \( r_{ij} \) is \( N(N-1)/2 \). In a matrix storage format, the relative distances form a strictly upper triangular matrix

    +$$ + \mathbf{r} \equiv \begin{pmatrix} + 0 & r_{1,2} & r_{1,3} & \cdots & r_{1,N} \\ + \vdots & 0 & r_{2,3} & \cdots & r_{2,N} \\ + \vdots & \vdots & 0 & \ddots & \vdots \\ + \vdots & \vdots & \vdots & \ddots & r_{N-1,N} \\ + 0 & 0 & 0 & \cdots & 0 + \end{pmatrix}. +$$ + +

    This applies to \( \mathbf{g} = \mathbf{g}(r_{ij}) \) as well.

    + +

    In our algorithm we will move one particle at the time, say the \( kth \)-particle. This sampling will be seen to be particularly efficient when we are going to compute a Slater determinant.

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    We have that the ratio between Jastrow factors \( R_C \) is given by

    +$$ +R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = +\prod_{i=1}^{k-1}\frac{g_{ik}^\mathrm{new}}{g_{ik}^\mathrm{cur}} +\prod_{i=k+1}^{N}\frac{ g_{ki}^\mathrm{new}} {g_{ki}^\mathrm{cur}}. +$$ + +

    For the Pade-Jastrow form

    +$$ + R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = +\frac{\exp{U_{new}}}{\exp{U_{cur}}} = \exp{\Delta U}, +$$ + +

    where

    +$$ +\Delta U = +\sum_{i=1}^{k-1}\big(f_{ik}^\mathrm{new}-f_{ik}^\mathrm{cur}\big) ++ +\sum_{i=k+1}^{N}\big(f_{ki}^\mathrm{new}-f_{ki}^\mathrm{cur}\big) +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    One needs to develop a special algorithm +that runs only through the elements of the upper triangular +matrix \( \mathbf{g} \) and have \( k \) as an index. +

    + +

    The expression to be derived in the following is of interest when computing the quantum force and the kinetic energy. It has the form

    +$$ +\frac{\mathbf{\nabla}_i\Psi_C}{\Psi_C} = \frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_i}, +$$ + +

    for all dimensions and with \( i \) running over all particles.

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    For the first derivative only \( N-1 \) terms survive the ratio because the \( g \)-terms that are not differentiated cancel with their corresponding ones in the denominator. Then,

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_k}. +$$ + +

    An equivalent equation is obtained for the exponential form after replacing \( g_{ij} \) by \( \exp(f_{ij}) \), yielding:

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k}, +$$ + +

    with both expressions scaling as \( \mathcal{O}(N) \).

    +
    + + +









    +

    Importance sampling

    +
    + +

    + +

    Using the identity

    +$$ +\frac{\partial}{\partial x_i}g_{ij} = -\frac{\partial}{\partial x_j}g_{ij}, +$$ + +

    we get expressions where all the derivatives acting on the particle are represented by the second index of \( g \):

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k} +-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_i}, +$$ + +

    and for the exponential case:

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} +-\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    For correlation forms depending only on the scalar distances \( r_{ij} \) we can use the chain rule. Noting that

    +$$ +\frac{\partial g_{ij}}{\partial x_j} = \frac{\partial g_{ij}}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_j} = \frac{x_j - x_i}{r_{ij}} \frac{\partial g_{ij}}{\partial r_{ij}}, +$$ + +

    we arrive at

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}} \frac{\mathbf{r_{ik}}}{r_{ik}} \frac{\partial g_{ik}}{\partial r_{ik}} +-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial g_{ki}}{\partial r_{ki}}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    Note that for the Pade-Jastrow form we can set \( g_{ij} \equiv g(r_{ij}) = e^{f(r_{ij})} = e^{f_{ij}} \) and

    +$$ +\frac{\partial g_{ij}}{\partial r_{ij}} = g_{ij} \frac{\partial f_{ij}}{\partial r_{ij}}. +$$ + +

    Therefore,

    +$$ +\frac{1}{\Psi_{C}}\frac{\partial \Psi_{C}}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\mathbf{r_{ik}}}{r_{ik}}\frac{\partial f_{ik}}{\partial r_{ik}} +-\sum_{i=k+1}^{N}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial f_{ki}}{\partial r_{ki}}, +$$ + +

    where

    +$$ + \mathbf{r}_{ij} = |\mathbf{r}_j - \mathbf{r}_i| = (x_j - x_i)\mathbf{e}_1 + (y_j - y_i)\mathbf{e}_2 + (z_j - z_i)\mathbf{e}_3 +$$ + +

    is the relative distance.

    +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is

    +$$ +\left[\frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C}\right]_x =\ +2\sum_{k=1}^{N} +\sum_{i=1}^{k-1}\frac{\partial^2 g_{ik}}{\partial x_k^2}\ +\ +\sum_{k=1}^N +\left( +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} - +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i} +\right)^2 +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    But we have a simple form for the function, namely

    +$$ +\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})}, +$$ + +

    and it is easy to see that for particle \( k \) +we have +

    +$$ + \frac{\mathbf{\nabla}^2_k \Psi_C}{\Psi_C }= +\sum_{ij\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})+ +\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right) +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    A stochastic process is simply a function of two variables, one is the time, +the other is a stochastic variable \( X \), defined by specifying +

    +
      +
    • the set \( \left\{x\right\} \) of possible values for \( X \);
    • +
    • the probability distribution, \( w_X(x) \), over this set, or briefly \( w(x) \)
    • +
    +

    The set of values \( \left\{x\right\} \) for \( X \) +may be discrete, or continuous. If the set of +values is continuous, then \( w_X (x) \) is a probability density so that +\( w_X (x)dx \) +is the probability that one finds the stochastic variable \( X \) to have values +in the range \( [x, x + dx] \) . +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    An arbitrary number of other stochastic variables may be derived from +\( X \). For example, any \( Y \) given by a mapping of \( X \), is also a stochastic +variable. The mapping may also be time-dependent, that is, the mapping +depends on an additional variable \( t \) +

    +$$ + Y_X (t) = f (X, t) . +$$ + +

    The quantity \( Y_X (t) \) is called a random function, or, since \( t \) often is time, +a stochastic process. A stochastic process is a function of two variables, +one is the time, the other is a stochastic variable \( X \). Let \( x \) be one of the +possible values of \( X \) then +

    +$$ + y(t) = f (x, t), +$$ + +

    is a function of \( t \), called a sample function or realization of the process. +In physics one considers the stochastic process to be an ensemble of such +sample functions. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    For many physical systems initial distributions of a stochastic +variable \( y \) tend to equilibrium distributions: \( w(y, t)\rightarrow w_0(y) \) +as \( t\rightarrow\infty \). In +equilibrium detailed balance constrains the transition rates +

    +$$ + W(y\rightarrow y')w(y ) = W(y'\rightarrow y)w_0 (y), +$$ + +

    where \( W(y'\rightarrow y) \) +is the probability, per unit time, that the system changes +from a state \( |y\rangle \) , characterized by the value \( y \) +for the stochastic variable \( Y \) , to a state \( |y'\rangle \). +

    + +

    Note that for a system in equilibrium the transition rate +\( W(y'\rightarrow y) \) and +the reverse \( W(y\rightarrow y') \) may be very different. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    Consider, for instance, a simple +system that has only two energy levels \( \epsilon_0 = 0 \) and +\( \epsilon_1 = \Delta E \). +

    + +

    For a system governed by the Boltzmann distribution we find (the partition function has been taken out)

    +$$ + W(0\rightarrow 1)\exp{-(\epsilon_0/kT)} = W(1\rightarrow 0)\exp{-(\epsilon_1/kT)} +$$ + +

    We get then

    +$$ + \frac{W(1\rightarrow 0)}{W(0 \rightarrow 1)}=\exp{-(\Delta E/kT)}, +$$ + +

    which goes to zero when \( T \) tends to zero.

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    If we assume a discrete set of events, +our initial probability +distribution function can be given by +

    +$$ + w_i(0) = \delta_{i,0}, +$$ + +

    and its time-development after a given time step \( \Delta t=\epsilon \) is

    +$$ + w_i(t) = \sum_{j}W(j\rightarrow i)w_j(t=0). +$$ + +

    The continuous analog to \( w_i(0) \) is

    +$$ + w(\mathbf{x})\rightarrow \delta(\mathbf{x}), +$$ + +

    where we now have generalized the one-dimensional position \( x \) to a generic-dimensional +vector \( \mathbf{x} \). The Kroenecker \( \delta \) function is replaced by the \( \delta \) distribution +function \( \delta(\mathbf{x}) \) at \( t=0 \). +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    The transition from a state \( j \) to a state \( i \) is now replaced by a transition +to a state with position \( \mathbf{y} \) from a state with position \( \mathbf{x} \). +The discrete sum of transition probabilities can then be replaced by an integral +and we obtain the new distribution at a time \( t+\Delta t \) as +

    +$$ + w(\mathbf{y},t+\Delta t)= \int W(\mathbf{y},t+\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}, +$$ + +

    and after \( m \) time steps we have

    +$$ + w(\mathbf{y},t+m\Delta t)= \int W(\mathbf{y},t+m\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}. +$$ + +

    When equilibrium is reached we have

    +$$ + w(\mathbf{y})= \int W(\mathbf{y}|\mathbf{x}, t)w(\mathbf{x})d\mathbf{x}, +$$ + +

    that is no time-dependence. Note our change of notation for \( W \)

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can solve the equation for \( w(\mathbf{y},t) \) by making a Fourier transform to +momentum space. +The PDF \( w(\mathbf{x},t) \) is related to its Fourier transform +\( \tilde{w}(\mathbf{k},t) \) through +

    +$$ + w(\mathbf{x},t) = \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})}\tilde{w}(\mathbf{k},t), +$$ + +

    and using the definition of the +\( \delta \)-function +

    +$$ + \delta(\mathbf{x}) = \frac{1}{2\pi} \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})}, +$$ + +

    we see that

    +$$ + \tilde{w}(\mathbf{k},0)=1/2\pi. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can then use the Fourier-transformed diffusion equation

    +$$ + \frac{\partial \tilde{w}(\mathbf{k},t)}{\partial t} = -D\mathbf{k}^2\tilde{w}(\mathbf{k},t), +$$ + +

    with the obvious solution

    +$$ + \tilde{w}(\mathbf{k},t)=\tilde{w}(\mathbf{k},0)\exp{\left[-(D\mathbf{k}^2t)\right)}= + \frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    With the Fourier transform we obtain

    +$$ + w(\mathbf{x},t)=\int_{-\infty}^{\infty}d\mathbf{k} \exp{\left[i\mathbf{kx}\right]}\frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}= + \frac{1}{\sqrt{4\pi Dt}}\exp{\left[-(\mathbf{x}^2/4Dt)\right]}, +$$ + +

    with the normalization condition

    +$$ + \int_{-\infty}^{\infty}w(\mathbf{x},t)d\mathbf{x}=1. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    The solution represents the probability of finding +our random walker at position \( \mathbf{x} \) at time \( t \) if the initial distribution +was placed at \( \mathbf{x}=0 \) at \( t=0 \). +

    + +

    There is another interesting feature worth observing. The discrete transition probability \( W \) +itself is given by a binomial distribution. +The results from the central limit theorem state that +transition probability in the limit \( n\rightarrow \infty \) converges to the normal +distribution. It is then possible to show that +

    +$$ + W(il-jl,n\epsilon)\rightarrow W(\mathbf{y},t+\Delta t|\mathbf{x},t)= + \frac{1}{\sqrt{4\pi D\Delta t}}\exp{\left[-((\mathbf{y}-\mathbf{x})^2/4D\Delta t)\right]}, +$$ + +

    and that it satisfies the normalization condition and is itself a solution +to the diffusion equation. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    Let us now assume that we have three PDFs for times \( t_0 < t' < t \), that is +\( w(\mathbf{x}_0,t_0) \), \( w(\mathbf{x}',t') \) and \( w(\mathbf{x},t) \). +We have then +

    +$$ + w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x}.t|\mathbf{x}'.t')w(\mathbf{x}',t')d\mathbf{x}', +$$ + +

    and

    +$$ + w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x}.t|\mathbf{x}_0.t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0, +$$ + +

    and

    +$$ + w(\mathbf{x}',t')= \int_{-\infty}^{\infty} W(\mathbf{x}'.t'|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can combine these equations and arrive at the famous Einstein-Smoluchenski-Kolmogorov-Chapman (ESKC) relation

    +$$ + W(\mathbf{x}t|\mathbf{x}_0t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'. +$$ + +

    We can replace the spatial dependence with a dependence upon say the velocity +(or momentum), that is we have +

    +$$ + W(\mathbf{v},t|\mathbf{v}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{v},t|\mathbf{v}',t')W(\mathbf{v}',t'|\mathbf{v}_0,t_0)d\mathbf{x}'. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We will now derive the Fokker-Planck equation. +We start from the ESKC equation +

    +$$ + W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'. +$$ + +

    Define \( s=t'-t_0 \), \( \tau=t-t' \) and \( t-t_0=s+\tau \). We have then

    +$$ + W(\mathbf{x},s+\tau|\mathbf{x}_0) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}')W(\mathbf{x}',s|\mathbf{x}_0)d\mathbf{x}'. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

Assume now that \( \tau \) is very small so that we can make an expansion in terms of a small step \( \xi \), with \( \mathbf{x}'=\mathbf{x}-\xi \), that is

+$$ + W(\mathbf{x},s|\mathbf{x}_0)+\frac{\partial W}{\partial s}\tau +O(\tau^2) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0)d\xi. +$$ + +

We assume that \( W(\mathbf{x},\tau|\mathbf{x}-\xi) \) takes non-negligible values only when \( \xi \) is small. This is just another way of stating the master equation.

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We say thus that \( \mathbf{x} \) changes only by a small amount in the time interval \( \tau \). +This means that we can make a Taylor expansion in terms of \( \xi \), that is we +expand +

    +$$ +W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0) = +\sum_{n=0}^{\infty}\frac{(-\xi)^n}{n!}\frac{\partial^n}{\partial x^n}\left[W(\mathbf{x}+\xi,\tau|\mathbf{x})W(\mathbf{x},s|\mathbf{x}_0) +\right]. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can then rewrite the ESKC equation as

+$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}\tau=-W(\mathbf{x},s|\mathbf{x}_0)+ +\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n} +\left[W(\mathbf{x},s|\mathbf{x}_0)\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi\right]. +$$ + +

    We have neglected higher powers of \( \tau \) and have used that for \( n=0 \) +we get simply \( W(\mathbf{x},s|\mathbf{x}_0) \) due to normalization. +

    +
    + + +


















    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We simplify the above by introducing the moments

    +$$ +M_n=\frac{1}{\tau}\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi= +\frac{\langle [\Delta x(\tau)]^n\rangle}{\tau}, +$$ + +

    resulting in

+$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}= +\sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n} +\left[W(\mathbf{x},s|\mathbf{x}_0)M_n\right]. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

When \( \tau \rightarrow 0 \) we assume that \( \langle [\Delta x(\tau)]^n\rangle \rightarrow 0 \) more rapidly than \( \tau \) itself if \( n > 2 \). +When \( \tau \) is much larger than the standard correlation time of +the system, then \( M_n \) for \( n > 2 \) can normally be neglected. +This means that fluctuations become negligible at large time scales. +

    + +

    If we neglect such terms we can rewrite the ESKC equation as

    +$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}= +-\frac{\partial M_1W(\mathbf{x},s|\mathbf{x}_0)}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W(\mathbf{x},s|\mathbf{x}_0)}{\partial x^2}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    In a more compact form we have

    +$$ +\frac{\partial W}{\partial s}= +-\frac{\partial M_1W}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W}{\partial x^2}, +$$ + +

    which is the Fokker-Planck equation! It is trivial to replace +position with velocity (momentum). +
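As an aside, here is a minimal sketch (assumed constant coefficients \( M_1=F \) and \( M_2=2D \) and explicit finite differences; not part of the original notes) showing how the equation can be integrated numerically; the mean of \( W \) then drifts as \( Ft \) while the width grows diffusively.

import numpy as np

F, D = 0.5, 0.5
x = np.linspace(-10, 10, 401)
dx, dt = x[1] - x[0], 1e-4
W = np.exp(-x**2/0.1); W /= W.sum()*dx          # narrow initial distribution
for _ in range(20000):                          # integrate to s = 2
    flux = F*W - D*np.gradient(W, dx)           # probability current
    W -= dt*np.gradient(flux, dx)               # dW/ds = -d(flux)/dx
print((x*W).sum()*dx)                           # mean, close to F*s = 1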

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

Consider a particle suspended in a liquid. On its path through the liquid it will continuously collide with the liquid molecules. Because on average the particle will collide more often on the front side than on the back side, it will experience a systematic force proportional to its velocity, and directed opposite to its velocity. Besides this systematic force the particle will experience a stochastic force \( \mathbf{F}(t) \). +The equations of motion are as follows (a short numerical sketch is given after the list) +

    +
      +
    • \( \frac{d\mathbf{r}}{dt}=\mathbf{v} \) and
    • +
    • \( \frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}+\mathbf{F} \).
    • +
    +
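A short integration sketch of the two equations above (Euler's method, assumed unit parameters; the noise strength anticipates the fluctuation-dissipation result \( \langle \mathbf{F}(t)\cdot \mathbf{F}(t')\rangle =6(kT/m)\xi \delta(t-t') \) derived below):

import numpy as np

rng = np.random.default_rng(1)
xi, kT, m = 1.0, 1.0, 1.0
dt, nsteps = 1e-3, 100000
v, r, v2 = np.zeros(3), np.zeros(3), 0.0
for _ in range(nsteps):
    F = rng.normal(0.0, np.sqrt(2*kT*xi/(m*dt)), size=3)  # white noise per component
    v += dt*(-xi*v + F)
    r += dt*v
    v2 += np.dot(v, v)
print(v2/nsteps, 3*kT/m)    # time-averaged <v.v> approaches 3kT/m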
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    From hydrodynamics we know that the friction constant \( \xi \) is given by

    +$$ +\xi =6\pi \eta a/m +$$ + +

where \( \eta \) is the viscosity of the solvent and \( a \) is the radius of the particle.

    + +

    Solving the second equation in the previous slide we get

    +$$ +\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F }(\tau ). +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    If we want to get some useful information out of this, we have to average over all possible realizations of +\( \mathbf{F}(t) \), with the initial velocity as a condition. A useful quantity for example is +

+$$ +\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t} ++2\int_{0}^{t}d\tau e^{-\xi (2t-\tau)}\mathbf{v}_{0}\cdot \langle \mathbf{F}(\tau )\rangle_{\mathbf{v}_{0}} +$$ + +$$ + +\int_{0}^{t}d\tau ^{\prime }\int_{0}^{t}d\tau e^{-\xi (2t-\tau -\tau ^{\prime })} +\langle \mathbf{F}(\tau )\cdot \mathbf{F}(\tau ^{\prime })\rangle_{ \mathbf{v}_{0}}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    In order to continue we have to make some assumptions about the conditional averages of the stochastic forces. +In view of the chaotic character of the stochastic forces the following +assumptions seem to be appropriate +

    +$$ +\langle \mathbf{F}(t)\rangle=0, +$$ + +

    and

    +$$ +\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle_{\mathbf{v}_{0}}= C_{\mathbf{v}_{0}}\delta (t-t^{\prime }). +$$ + +

    We omit the subscript \( \mathbf{v}_{0} \), when the quantity of interest turns out to be independent of \( \mathbf{v}_{0} \). Using the last three equations we get

    +$$ +\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}+\frac{C_{\mathbf{v}_{0}}}{2\xi }(1-e^{-2\xi t}). +$$ + +

For large \( t \) this should be equal to \( 3kT/m \), from which it follows that

    +$$ +\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle =6\frac{kT}{m}\xi \delta (t-t^{\prime }). +$$ + +

This result is called the fluctuation-dissipation theorem.

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    Integrating

    +$$ +\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F }(\tau ), +$$ + +

    we get

+$$ +\mathbf{r}(t)=\mathbf{r}_{0}+\mathbf{v}_{0}\frac{1}{\xi }(1-e^{-\xi t})+ +\int_0^td\tau \int_0^{\tau}d\tau ^{\prime } e^{-\xi (\tau -\tau ^{\prime })}\mathbf{F}(\tau ^{\prime }), +$$ + +

    from which we calculate the mean square displacement

    +$$ +\langle ( \mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle _{\mathbf{v}_{0}}=\frac{v_0^2}{\xi}(1-e^{-\xi t})^{2}+\frac{3kT}{m\xi ^{2}}(2\xi t-3+4e^{-\xi t}-e^{-2\xi t}). +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    For very large \( t \) this becomes

    +$$ +\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =\frac{6kT}{m\xi }t +$$ + +

    from which we get the Einstein relation

    +$$ +D= \frac{kT}{m\xi } +$$ + +

    where we have used \( \langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =6Dt \).

    +
    + +
    - © 1999-2023, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license + © 1999-2024, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
    diff --git a/doc/pub/week3/html/week3.html b/doc/pub/week3/html/week3.html index 07af4caa..8277d863 100644 --- a/doc/pub/week3/html/week3.html +++ b/doc/pub/week3/html/week3.html @@ -140,52 +140,150 @@ @@ -223,17 +321,17 @@

Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importa
    -

    February 6-10

    +

    February 2












    -

    Overview of week 5

    +

    Overview of week 5, January 29-February 2

    Topics

      -
    • Markov Chain Monte Carlo
    • +
    • Markov Chain Monte Carlo and repetition from last week
    • Metropolis-Hastings sampling and Importance Sampling
    @@ -246,319 +344,10 @@

    Overview of week 5

• Overview video on Metropolis algorithm
  • Video of lecture tba
  • Handwritten notes tba
  • -
  • See also Lectures from FYS3150/4150 on the Metropolis Algorithm
  • -









    -

    Basics of the Metropolis Algorithm

    - -

    The Metropolis et al. -algorithm was invented by Metropolis et. a -and is often simply called the Metropolis algorithm. -It is a method to sample a normalized probability -distribution by a stochastic process. We define \( {\cal P}_i^{(n)} \) to -be the probability for finding the system in the state \( i \) at step \( n \). -The algorithm is then -

    - -









    -

    The basic of the Metropolis Algorithm

    - - -

    We wish to derive the required properties of \( T \) and \( A \) such that -\( {\cal P}_i^{(n\rightarrow \infty)} \rightarrow p_i \) so that starting -from any distribution, the method converges to the correct distribution. -Note that the description here is for a discrete probability distribution. -Replacing probabilities \( p_i \) with expressions like \( p(x_i)dx_i \) will -take all of these over to the corresponding continuum expressions. -

    - -









    -

    More on the Metropolis

    - -

    The dynamical equation for \( {\cal P}_i^{(n)} \) can be written directly from -the description above. The probability of being in the state \( i \) at step \( n \) -is given by the probability of being in any state \( j \) at the previous step, -and making an accepted transition to \( i \) added to the probability of -being in the state \( i \), making a transition to any state \( j \) and -rejecting the move: -

    -$$ -\begin{equation} -\label{eq:eq1} -{\cal P}^{(n)}_i = \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} -+{\cal P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right) -\right ] \,. -\end{equation} -$$ - - -









    -

    Metropolis algorithm, setting it up

    -

    Since the probability of making some transition must be 1, -\( \sum_j T_{i\rightarrow j} = 1 \), and Eq. \eqref{eq:eq1} becomes -

    - -$$ -\begin{equation} -{\cal P}^{(n)}_i = {\cal P}^{(n-1)}_i + - \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} --{\cal P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] \,. -\label{_auto1} -\end{equation} -$$ - - -









    -

    Metropolis continues

    - -

    For large \( n \) we require that \( {\cal P}^{(n\rightarrow \infty)}_i = p_i \), -the desired probability distribution. Taking this limit, gives the -balance requirement -

    - -$$ -\begin{equation} -\sum_j \left [p_jT_{j\rightarrow i} A_{j\rightarrow i}-p_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] = 0, -\label{_auto2} -\end{equation} -$$ - - -









    -

    Detailed Balance

    - -

    The balance requirement is very weak. Typically the much stronger detailed -balance requirement is enforced, that is rather than the sum being -set to zero, we set each term separately to zero and use this -to determine the acceptance probabilities. Rearranging, the result is -

    - -$$ -\begin{equation} -\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}} -= \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}} \,. -\label{_auto3} -\end{equation} -$$ - - -









    -

    More on Detailed Balance

    - -

    The Metropolis choice is to maximize the \( A \) values, that is

    - -$$ -\begin{equation} -A_{j \rightarrow i} = \min \left ( 1, -\frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ). -\label{_auto4} -\end{equation} -$$ - -

    Other choices are possible, but they all correspond to multilplying -\( A_{i\rightarrow j} \) and \( A_{j\rightarrow i} \) by the same constant -smaller than unity. The penalty function method uses just such -a factor to compensate for \( p_i \) that are evaluated stochastically -and are therefore noisy. -

    - -

    Having chosen the acceptance probabilities, we have guaranteed that -if the \( {\cal P}_i^{(n)} \) has equilibrated, that is if it is equal to \( p_i \), -it will remain equilibrated. Next we need to find the circumstances for -convergence to equilibrium. -

    - -









    -

    Dynamical Equation

    - -

    The dynamical equation can be written as

    - -$$ -\begin{equation} -{\cal P}^{(n)}_i = \sum_j M_{ij}{\cal P}^{(n-1)}_j -\label{_auto5} -\end{equation} -$$ - -

    with the matrix \( M \) given by

    - -$$ -\begin{equation} -M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k} -\right ] + T_{j\rightarrow i} A_{j\rightarrow i} \,. -\label{_auto6} -\end{equation} -$$ - -

    Summing over \( i \) shows that \( \sum_i M_{ij} = 1 \), and since -\( \sum_k T_{i\rightarrow k} = 1 \), and \( A_{i \rightarrow k} \leq 1 \), the -elements of the matrix satisfy \( M_{ij} \geq 0 \). The matrix \( M \) is therefore -a stochastic matrix. -

    - -









    -

    Interpreting the Metropolis Algorithm

    - -

    The Metropolis method is simply the power method for computing the -right eigenvector of \( M \) with the largest magnitude eigenvalue. -By construction, the correct probability distribution is a right eigenvector -with eigenvalue 1. Therefore, for the Metropolis method to converge -to this result, we must show that \( M \) has only one eigenvalue with this -magnitude, and all other eigenvalues are smaller. -

    - -

    Even a defective matrix has at least one left and right eigenvector for -each eigenvalue. An example of a defective matrix is -

    - -$$ -\begin{bmatrix} -0 & 1\\ -0 & 0 \\ -\end{bmatrix}, -$$ - -

    with two zero eigenvalues, only one right eigenvector

    - -$$ -\begin{bmatrix} -1 \\ -0\\ -\end{bmatrix} -$$ - -

    and only one left eigenvector \( (0\ 1) \).

    - -









    -

    Gershgorin bounds and Metropolis

    - -

    The Gershgorin bounds for the eigenvalues can be derived by multiplying on -the left with the eigenvector with the maximum and minimum eigenvalues, -

    - -$$ -\begin{align} -\sum_i \psi^{\rm max}_i M_{ij} =& \lambda_{\rm max} \psi^{\rm max}_j -\nonumber\\ -\sum_i \psi^{\rm min}_i M_{ij} =& \lambda_{\rm min} \psi^{\rm min}_j -\label{_auto7} -\end{align} -$$ - - -









    -

    Normalizing the Eigenvectors

    - -

    Next we choose the normalization of these eigenvectors so that the -largest element (or one of the equally largest elements) -has value 1. Let's call this element \( k \), and -we can therefore bound the magnitude of the other elements to be less -than or equal to 1. -This leads to the inequalities, using the property that \( M_{ij}\geq 0 \), -

    - -$$ -\begin{eqnarray} -\sum_i M_{ik} \leq \lambda_{\rm max} -\nonumber\\ -M_{kk}-\sum_{i \neq k} M_{ik} \geq \lambda_{\rm min} -\end{eqnarray} -$$ - -

    where the equality from the maximum -will occur only if the eigenvector takes the value 1 for all values of -\( i \) where \( M_{ik} \neq 0 \), and the equality for the minimum will -occur only if the eigenvector takes the value -1 for all values of \( i\neq k \) -where \( M_{ik} \neq 0 \). -

    - -









    -

    More Metropolis analysis

    - -

    That the maximum eigenvalue is 1 follows immediately from the property -that \( \sum_i M_{ik} = 1 \). Similarly the minimum eigenvalue can be -1, -but only if \( M_{kk} = 0 \) and the magnitude of all the other elements -\( \psi_i^{\rm min} \) of -the eigenvector that multiply nonzero elements \( M_{ik} \) are -1. -

    - -

    Let's first see what the properties of \( M \) must be -to eliminate any -1 eigenvalues. -To have a -1 eigenvalue, the left eigenvector must contain only \( \pm 1 \) -and \( 0 \) values. Taking in turn each \( \pm 1 \) value as the maximum, so that -it corresponds to the index \( k \), the nonzero \( M_{ik} \) values must -correspond to \( i \) index values of the eigenvector which have opposite -sign elements. That is, the \( M \) matrix must break up into sets of -states that always make transitions from set A to set B ... back to set A. -In particular, there can be no rejections of these moves in the cycle -since the -1 eigenvalue requires \( M_{kk}=0 \). To guarantee no eigenvalues -with eigenvalue -1, we simply have to make sure that there are no -cycles among states. Notice that this is generally trivial since such -cycles cannot have any rejections at any stage. An example of such -a cycle is sampling a noninteracting Ising spin. If the transition is -taken to flip the spin, and the energy difference is zero, the Boltzmann -factor will not change and the move will always be accepted. The system -will simply flip from up to down to up to down ad infinitum. Including -a rejection probability or using a heat bath algorithm -immediately fixes the problem. -

    - -









    -

    Final Considerations I

    - -

    Next we need to make sure that there is only one left eigenvector -with eigenvalue 1. To get an eigenvalue 1, the left eigenvector must be -constructed from only ones and zeroes. It is straightforward to -see that a vector made up of -ones and zeroes can only be an eigenvector with eigenvalue 1 if the -matrix element \( M_{ij} = 0 \) for all cases where \( \psi_i \neq \psi_j \). -That is we can choose an index \( i \) and take \( \psi_i = 1 \). -We require all elements \( \psi_j \) where \( M_{ij} \neq 0 \) to also have -the value \( 1 \). Continuing we then require all elements \( \psi_\ell \) $M_{j\ell}$ -to have value \( 1 \). Only if the matrix \( M \) can be put into block diagonal -form can there be more than one choice for the left eigenvector with -eigenvalue 1. We therefore require that the transition matrix not -be in block diagonal form. This simply means that we must choose -the transition probability so that we can get from any allowed state -to any other in a series of transitions. -

    - -









    -

    Final Considerations II

    - -

    Finally, we note that for a defective matrix, with more eigenvalues -than independent eigenvectors for eigenvalue 1, -the left and right -eigenvectors of eigenvalue 1 would be orthogonal. -Here the left eigenvector is all 1 -except for states that can never be reached, and the right eigenvector -is \( p_i > 0 \) except for states that give zero probability. We already -require that we can reach -all states that contribute to \( p_i \). Therefore the left and right -eigenvectors with eigenvalue 1 do not correspond to a defective sector -of the matrix and they are unique. The Metropolis algorithm therefore -converges exponentially to the desired distribution. -

    - -









    -

    Final Considerations III

    - -

    The requirements for the transition \( T_{i \rightarrow j} \) are

    -









    Importance Sampling: Overview of what needs to be coded

    @@ -733,7 +522,6 @@

    Code exa from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import sys -from numba import jit,njit # Trial wave function for the 2-electron quantum dot in two dims @@ -788,9 +576,6 @@

    Code exa
    # The Monte Carlo sampling with the Metropolis algo
    -# jit decorator tells Numba to compile this function.
    -# The argument types will be inferred by Numba when function is called.
    -@jit()
     def MonteCarloSampling():
     
         NumberMCcycles= 100000
    @@ -925,9 +710,897 @@ 

    Code exa

    +









    +

    Importance sampling, program elements

    +
    + +

    +

    The general derivative formula of the Jastrow factor is (the subscript \( C \) stands for Correlation)

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k} +$$ + +

However, +with our correlation function written in a way which can be reused later as +

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}f(r_{ij})\right\}}, +$$ + +

the gradient needed for the quantum force and local energy is easy to compute. +The function \( f(r_{ij}) \) depends on the system under study. In the equations below we will keep this general form. +

    +
    + + +









    +

    Importance sampling, program elements

    +
    + +

    +

In the Metropolis-Hastings algorithm, the acceptance ratio determines the probability for a particle to be accepted at a new position. The ratio of the trial wave functions evaluated at the new and current positions is given by (\( OB \) for the onebody part)

    +$$ +R \equiv \frac{\Psi_{T}^{new}}{\Psi_{T}^{old}} = +\frac{\Psi_{OB}^{new}}{\Psi_{OB}^{old}}\frac{\Psi_{C}^{new}}{\Psi_{C}^{old}} +$$ + +

    Here \( \Psi_{OB} \) is our onebody part (Slater determinant or product of boson single-particle states) while \( \Psi_{C} \) is our correlation function, or Jastrow factor. +We need to optimize the \( \nabla \Psi_T / \Psi_T \) ratio and the second derivative as well, that is +the \( \mathbf{\nabla}^2 \Psi_T/\Psi_T \) ratio. The first is needed when we compute the so-called quantum force in importance sampling. +The second is needed when we compute the kinetic energy term of the local energy. +

    +$$ +\frac{\mathbf{\mathbf{\nabla}} \Psi}{\Psi} = \frac{\mathbf{\nabla} (\Psi_{OB} \, \Psi_{C})}{\Psi_{OB} \, \Psi_{C}} = \frac{ \Psi_C \mathbf{\nabla} \Psi_{OB} + \Psi_{OB} \mathbf{\nabla} \Psi_{C}}{\Psi_{OB} \Psi_{C}} = \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla} \Psi_C}{ \Psi_C} +$$ +
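In code this separation means the quantum force can be assembled from the one-body and correlation ratios independently. A minimal sketch (hypothetical helper names, and assuming the standard importance-sampling definition \( \mathbf{F} = 2\mathbf{\nabla}\Psi_T/\Psi_T \)):

def quantum_force(grad_ratio_OB, grad_ratio_C):
    # grad Psi_T / Psi_T = grad Psi_OB / Psi_OB + grad Psi_C / Psi_C
    return 2.0*(grad_ratio_OB + grad_ratio_C)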
    + + +









    +

    Importance sampling

    +
    + +

    +

    The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is

    +$$ + \langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\mathbf{\nabla}_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle}, +$$ + +$$ +\hat{K}_i = -\frac{1}{2}\frac{\mathbf{\nabla}_{i}^{2} \Psi}{\Psi}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The second derivative which enters the definition of the local energy is

    +$$ +\frac{\mathbf{\nabla}^2 \Psi}{\Psi}=\frac{\mathbf{\nabla}^2 \Psi_{OB}}{\Psi_{OB}} + \frac{\mathbf{\nabla}^2 \Psi_C}{ \Psi_C} + 2 \frac{\mathbf{\nabla} \Psi_{OB}}{\Psi_{OB}}\cdot\frac{\mathbf{\nabla} \Psi_C}{ \Psi_C} +$$ + +

We discuss here how to calculate these quantities in an optimal way.

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    We have defined the correlated function as

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})=\prod_{i < j}^Ng(r_{ij})= \prod_{i=1}^N\prod_{j=i+1}^Ng(r_{ij}), +$$ + +

    with +\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \) in three dimensions or +\( r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \) if we work with two-dimensional systems. +

    + +

    In our particular case we have

    +$$ +\Psi_C=\prod_{i < j}g(r_{ij})=\exp{\left\{\sum_{i < j}f(r_{ij})\right\}}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The total number of different relative distances \( r_{ij} \) is \( N(N-1)/2 \). In a matrix storage format, the relative distances form a strictly upper triangular matrix

    +$$ + \mathbf{r} \equiv \begin{pmatrix} + 0 & r_{1,2} & r_{1,3} & \cdots & r_{1,N} \\ + \vdots & 0 & r_{2,3} & \cdots & r_{2,N} \\ + \vdots & \vdots & 0 & \ddots & \vdots \\ + \vdots & \vdots & \vdots & \ddots & r_{N-1,N} \\ + 0 & 0 & 0 & \cdots & 0 + \end{pmatrix}. +$$ + +

    This applies to \( \mathbf{g} = \mathbf{g}(r_{ij}) \) as well.

    + +

In our algorithm we will move one particle at a time, say the \( k \)-th particle. This sampling will be seen to be particularly efficient when we are going to compute a Slater determinant.
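A minimal sketch (assumed \( N\times d \) NumPy position array; not from the original program) filling the strictly upper triangular distance matrix shown above:

import numpy as np

def distance_matrix(r):
    # only the N(N-1)/2 entries with i < j are filled
    N = len(r)
    rij = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            rij[i, j] = np.linalg.norm(r[i] - r[j])
    return rij

print(distance_matrix(np.random.rand(4, 3)))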

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    We have that the ratio between Jastrow factors \( R_C \) is given by

    +$$ +R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = +\prod_{i=1}^{k-1}\frac{g_{ik}^\mathrm{new}}{g_{ik}^\mathrm{cur}} +\prod_{i=k+1}^{N}\frac{ g_{ki}^\mathrm{new}} {g_{ki}^\mathrm{cur}}. +$$ + +

    For the Pade-Jastrow form

    +$$ + R_{C} = \frac{\Psi_{C}^\mathrm{new}}{\Psi_{C}^\mathrm{cur}} = +\frac{\exp{U_{new}}}{\exp{U_{cur}}} = \exp{\Delta U}, +$$ + +

    where

    +$$ +\Delta U = +\sum_{i=1}^{k-1}\big(f_{ik}^\mathrm{new}-f_{ik}^\mathrm{cur}\big) ++ +\sum_{i=k+1}^{N}\big(f_{ki}^\mathrm{new}-f_{ki}^\mathrm{cur}\big) +$$ +
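A hedged sketch (assumed array layout, with the pair function \( f \) supplied by the caller) of how \( \Delta U \) is accumulated at \( \mathcal{O}(N) \) cost when only particle \( k \) moves:

import numpy as np

def delta_U(r, r_k_new, k, f):
    # sum of f(r_ki^new) - f(r_ki^cur) over all i != k
    dU = 0.0
    for i in range(len(r)):
        if i != k:
            dU += f(np.linalg.norm(r_k_new - r[i])) - f(np.linalg.norm(r[k] - r[i]))
    return dU

# the Jastrow part of the acceptance ratio; the pair function here is hypothetical
r = np.random.rand(4, 3)
print(np.exp(delta_U(r, r[1] + 0.1, 1, lambda x: 0.5*x/(1 + x))))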
    + + +









    +

    Importance sampling

    +
    + +

    +

One needs to develop a special algorithm +that runs only through the elements of the upper triangular +matrix \( \mathbf{g} \) and has \( k \) as an index. +

    + +

    The expression to be derived in the following is of interest when computing the quantum force and the kinetic energy. It has the form

    +$$ +\frac{\mathbf{\nabla}_i\Psi_C}{\Psi_C} = \frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_i}, +$$ + +

    for all dimensions and with \( i \) running over all particles.

    +
    + +









    +

    Importance sampling

    +
    + +

    +

    For the first derivative only \( N-1 \) terms survive the ratio because the \( g \)-terms that are not differentiated cancel with their corresponding ones in the denominator. Then,

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_k}. +$$ + +

    An equivalent equation is obtained for the exponential form after replacing \( g_{ij} \) by \( \exp(f_{ij}) \), yielding:

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} ++ +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k}, +$$ + +

    with both expressions scaling as \( \mathcal{O}(N) \).

    +
    + + +









    +

    Importance sampling

    +
    + +

    + +

    Using the identity

    +$$ +\frac{\partial}{\partial x_i}g_{ij} = -\frac{\partial}{\partial x_j}g_{ij}, +$$ + +

    we get expressions where all the derivatives acting on the particle are represented by the second index of \( g \):

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}}\frac{\partial g_{ik}}{\partial x_k} +-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\partial g_{ki}}{\partial x_i}, +$$ + +

    and for the exponential case:

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} +-\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    For correlation forms depending only on the scalar distances \( r_{ij} \) we can use the chain rule. Noting that

    +$$ +\frac{\partial g_{ij}}{\partial x_j} = \frac{\partial g_{ij}}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_j} = \frac{x_j - x_i}{r_{ij}} \frac{\partial g_{ij}}{\partial r_{ij}}, +$$ + +

    we arrive at

    +$$ +\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{1}{g_{ik}} \frac{\mathbf{r_{ik}}}{r_{ik}} \frac{\partial g_{ik}}{\partial r_{ik}} +-\sum_{i=k+1}^{N}\frac{1}{g_{ki}}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial g_{ki}}{\partial r_{ki}}. +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    Note that for the Pade-Jastrow form we can set \( g_{ij} \equiv g(r_{ij}) = e^{f(r_{ij})} = e^{f_{ij}} \) and

    +$$ +\frac{\partial g_{ij}}{\partial r_{ij}} = g_{ij} \frac{\partial f_{ij}}{\partial r_{ij}}. +$$ + +

    Therefore,

    +$$ +\frac{1}{\Psi_{C}}\frac{\partial \Psi_{C}}{\partial x_k} = +\sum_{i=1}^{k-1}\frac{\mathbf{r_{ik}}}{r_{ik}}\frac{\partial f_{ik}}{\partial r_{ik}} +-\sum_{i=k+1}^{N}\frac{\mathbf{r_{ki}}}{r_{ki}}\frac{\partial f_{ki}}{\partial r_{ki}}, +$$ + +

    where

+$$ + \mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i = (x_j - x_i)\mathbf{e}_1 + (y_j - y_i)\mathbf{e}_2 + (z_j - z_i)\mathbf{e}_3 +$$ + +

is the relative distance vector.
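A minimal sketch (assumed \( N\times d \) position array and a user-supplied \( f'(r) \); the example \( f' \) is purely hypothetical) of this \( \mathcal{O}(N) \) gradient ratio for particle \( k \); note that the two sums above combine into a single sum over \( i\ne k \):

import numpy as np

def jastrow_gradient_ratio(r, k, fprime):
    # returns grad_k Psi_C / Psi_C; both sums reduce to one over i != k
    grad = np.zeros(r.shape[1])
    for i in range(len(r)):
        if i != k:
            diff = r[k] - r[i]                  # r_k - r_i
            dist = np.linalg.norm(diff)         # r_{ki}
            grad += diff/dist*fprime(dist)
    return grad

r = np.random.rand(5, 3)
print(jastrow_gradient_ratio(r, 2, lambda x: 0.5/(1 + x)**2))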

    +
    + + +









    +

    Importance sampling

    +
    + +

    +

    The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is

+$$ +\left[\frac{\mathbf{\nabla}^2 \Psi_C}{\Psi_C}\right]_x = +2\sum_{k=1}^{N} +\sum_{i=1}^{k-1}\frac{\partial^2 g_{ik}}{\partial x_k^2} ++ +\sum_{k=1}^N +\left( +\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} - +\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i} +\right)^2 +$$ +
    + + +









    +

    Importance sampling

    +
    + +

    +

    But we have a simple form for the function, namely

    +$$ +\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})}, +$$ + +

    and it is easy to see that for particle \( k \) +we have +

+$$ + \frac{\mathbf{\nabla}^2_k \Psi_C}{\Psi_C }= +\sum_{ij\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)\cdot(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})+ +\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right) +$$ +
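A hedged sketch of this closed-form Laplacian ratio (assumed three-dimensional positions and user-supplied \( f' \) and \( f'' \), both hypothetical here); the double sum is simply the square of the gradient ratio computed earlier:

import numpy as np

def jastrow_laplacian_ratio(r, k, fp, fpp):
    idx = [i for i in range(len(r)) if i != k]
    diff = r[k] - r                         # r_k - r_i
    dist = np.linalg.norm(diff, axis=1)     # r_{ki}; dist[k] is zero but never used
    grad = sum(diff[i]/dist[i]*fp(dist[i]) for i in idx)
    cross = np.dot(grad, grad)              # the i,j double sum
    radial = sum(fpp(dist[j]) + 2.0/dist[j]*fp(dist[j]) for j in idx)  # 2/r assumes 3D
    return cross + radial

r = np.random.rand(4, 3)
print(jastrow_laplacian_ratio(r, 0, lambda x: 0.5/(1 + x)**2, lambda x: -1.0/(1 + x)**3))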
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    A stochastic process is simply a function of two variables, one is the time, +the other is a stochastic variable \( X \), defined by specifying +

    +
      +
    • the set \( \left\{x\right\} \) of possible values for \( X \);
    • +
• the probability distribution, \( w_X(x) \), over this set, or briefly \( w(x) \).
    • +
    +

The set of values \( \left\{x\right\} \) for \( X \) +may be discrete or continuous. If the set of +values is continuous, then \( w_X (x) \) is a probability density so that +\( w_X (x)dx \) +is the probability that one finds the stochastic variable \( X \) to have values +in the range \( [x, x + dx] \). +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    An arbitrary number of other stochastic variables may be derived from +\( X \). For example, any \( Y \) given by a mapping of \( X \), is also a stochastic +variable. The mapping may also be time-dependent, that is, the mapping +depends on an additional variable \( t \) +

    +$$ + Y_X (t) = f (X, t) . +$$ + +

    The quantity \( Y_X (t) \) is called a random function, or, since \( t \) often is time, +a stochastic process. A stochastic process is a function of two variables, +one is the time, the other is a stochastic variable \( X \). Let \( x \) be one of the +possible values of \( X \) then +

    +$$ + y(t) = f (x, t), +$$ + +

    is a function of \( t \), called a sample function or realization of the process. +In physics one considers the stochastic process to be an ensemble of such +sample functions. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    For many physical systems initial distributions of a stochastic +variable \( y \) tend to equilibrium distributions: \( w(y, t)\rightarrow w_0(y) \) +as \( t\rightarrow\infty \). In +equilibrium detailed balance constrains the transition rates +

+$$ + W(y\rightarrow y')w_0(y) = W(y'\rightarrow y)w_0(y'), +$$ + +

where \( W(y\rightarrow y') \) +is the probability, per unit time, that the system changes +from a state \( |y\rangle \), characterized by the value \( y \) +for the stochastic variable \( Y \), to a state \( |y'\rangle \). +

    + +

    Note that for a system in equilibrium the transition rate +\( W(y'\rightarrow y) \) and +the reverse \( W(y\rightarrow y') \) may be very different. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    Consider, for instance, a simple +system that has only two energy levels \( \epsilon_0 = 0 \) and +\( \epsilon_1 = \Delta E \). +

    + +

    For a system governed by the Boltzmann distribution we find (the partition function has been taken out)

    +$$ + W(0\rightarrow 1)\exp{-(\epsilon_0/kT)} = W(1\rightarrow 0)\exp{-(\epsilon_1/kT)} +$$ + +

    We get then

    +$$ + \frac{W(1\rightarrow 0)}{W(0 \rightarrow 1)}=\exp{-(\Delta E/kT)}, +$$ + +

    which goes to zero when \( T \) tends to zero.

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    If we assume a discrete set of events, +our initial probability +distribution function can be given by +

    +$$ + w_i(0) = \delta_{i,0}, +$$ + +

    and its time-development after a given time step \( \Delta t=\epsilon \) is

+$$ + w_i(\epsilon) = \sum_{j}W(j\rightarrow i)w_j(t=0). +$$ + +

    The continuous analog to \( w_i(0) \) is

    +$$ + w(\mathbf{x})\rightarrow \delta(\mathbf{x}), +$$ + +

where we now have generalized the one-dimensional position \( x \) to a position +vector \( \mathbf{x} \) of arbitrary dimension. The Kronecker \( \delta \) function is replaced by the \( \delta \) distribution +function \( \delta(\mathbf{x}) \) at \( t=0 \). +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    The transition from a state \( j \) to a state \( i \) is now replaced by a transition +to a state with position \( \mathbf{y} \) from a state with position \( \mathbf{x} \). +The discrete sum of transition probabilities can then be replaced by an integral +and we obtain the new distribution at a time \( t+\Delta t \) as +

    +$$ + w(\mathbf{y},t+\Delta t)= \int W(\mathbf{y},t+\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}, +$$ + +

    and after \( m \) time steps we have

    +$$ + w(\mathbf{y},t+m\Delta t)= \int W(\mathbf{y},t+m\Delta t| \mathbf{x},t)w(\mathbf{x},t)d\mathbf{x}. +$$ + +

    When equilibrium is reached we have

    +$$ + w(\mathbf{y})= \int W(\mathbf{y}|\mathbf{x}, t)w(\mathbf{x})d\mathbf{x}, +$$ + +

that is, there is no time dependence. Note our change of notation for \( W \).

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can solve the equation for \( w(\mathbf{y},t) \) by making a Fourier transform to +momentum space. +The PDF \( w(\mathbf{x},t) \) is related to its Fourier transform +\( \tilde{w}(\mathbf{k},t) \) through +

    +$$ + w(\mathbf{x},t) = \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})}\tilde{w}(\mathbf{k},t), +$$ + +

    and using the definition of the +\( \delta \)-function +

    +$$ + \delta(\mathbf{x}) = \frac{1}{2\pi} \int_{-\infty}^{\infty}d\mathbf{k} \exp{(i\mathbf{kx})}, +$$ + +

    we see that

    +$$ + \tilde{w}(\mathbf{k},0)=1/2\pi. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can then use the Fourier-transformed diffusion equation

    +$$ + \frac{\partial \tilde{w}(\mathbf{k},t)}{\partial t} = -D\mathbf{k}^2\tilde{w}(\mathbf{k},t), +$$ + +

    with the obvious solution

+$$ + \tilde{w}(\mathbf{k},t)=\tilde{w}(\mathbf{k},0)\exp{\left[-(D\mathbf{k}^2t)\right]}= + \frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    With the Fourier transform we obtain

    +$$ + w(\mathbf{x},t)=\int_{-\infty}^{\infty}d\mathbf{k} \exp{\left[i\mathbf{kx}\right]}\frac{1}{2\pi}\exp{\left[-(D\mathbf{k}^2t)\right]}= + \frac{1}{\sqrt{4\pi Dt}}\exp{\left[-(\mathbf{x}^2/4Dt)\right]}, +$$ + +

    with the normalization condition

    +$$ + \int_{-\infty}^{\infty}w(\mathbf{x},t)d\mathbf{x}=1. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    The solution represents the probability of finding +our random walker at position \( \mathbf{x} \) at time \( t \) if the initial distribution +was placed at \( \mathbf{x}=0 \) at \( t=0 \). +

    + +

There is another interesting feature worth observing. The discrete transition probability \( W \) +itself is given by a binomial distribution. +The central limit theorem states that +the transition probability converges to the normal +distribution in the limit \( n\rightarrow \infty \). It is then possible to show that +

    +$$ + W(il-jl,n\epsilon)\rightarrow W(\mathbf{y},t+\Delta t|\mathbf{x},t)= + \frac{1}{\sqrt{4\pi D\Delta t}}\exp{\left[-((\mathbf{y}-\mathbf{x})^2/4D\Delta t)\right]}, +$$ + +

    and that it satisfies the normalization condition and is itself a solution +to the diffusion equation. +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    Let us now assume that we have three PDFs for times \( t_0 < t' < t \), that is +\( w(\mathbf{x}_0,t_0) \), \( w(\mathbf{x}',t') \) and \( w(\mathbf{x},t) \). +We have then +

+$$ + w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')w(\mathbf{x}',t')d\mathbf{x}', +$$ + +

    and

+$$ + w(\mathbf{x},t)= \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0, +$$ + +

    and

+$$ + w(\mathbf{x}',t')= \int_{-\infty}^{\infty} W(\mathbf{x}',t'|\mathbf{x}_0,t_0)w(\mathbf{x}_0,t_0)d\mathbf{x}_0. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

We can combine these equations and arrive at the famous Einstein-Smoluchowski-Kolmogorov-Chapman (ESKC) relation

+$$ + W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'. +$$ + +

    We can replace the spatial dependence with a dependence upon say the velocity +(or momentum), that is we have +

+$$ + W(\mathbf{v},t|\mathbf{v}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{v},t|\mathbf{v}',t')W(\mathbf{v}',t'|\mathbf{v}_0,t_0)d\mathbf{v}'. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We will now derive the Fokker-Planck equation. +We start from the ESKC equation +

    +$$ + W(\mathbf{x},t|\mathbf{x}_0,t_0) = \int_{-\infty}^{\infty} W(\mathbf{x},t|\mathbf{x}',t')W(\mathbf{x}',t'|\mathbf{x}_0,t_0)d\mathbf{x}'. +$$ + +

    Define \( s=t'-t_0 \), \( \tau=t-t' \) and \( t-t_0=s+\tau \). We have then

    +$$ + W(\mathbf{x},s+\tau|\mathbf{x}_0) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}')W(\mathbf{x}',s|\mathbf{x}_0)d\mathbf{x}'. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

Assume now that \( \tau \) is very small so that we can make an expansion in terms of a small step \( \xi \), with \( \mathbf{x}'=\mathbf{x}-\xi \), that is

+$$ + W(\mathbf{x},s|\mathbf{x}_0)+\frac{\partial W}{\partial s}\tau +O(\tau^2) = \int_{-\infty}^{\infty} W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0)d\xi. +$$ + +

We assume that \( W(\mathbf{x},\tau|\mathbf{x}-\xi) \) takes non-negligible values only when \( \xi \) is small. This is just another way of stating the master equation.

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We say thus that \( \mathbf{x} \) changes only by a small amount in the time interval \( \tau \). +This means that we can make a Taylor expansion in terms of \( \xi \), that is we +expand +

    +$$ +W(\mathbf{x},\tau|\mathbf{x}-\xi)W(\mathbf{x}-\xi,s|\mathbf{x}_0) = +\sum_{n=0}^{\infty}\frac{(-\xi)^n}{n!}\frac{\partial^n}{\partial x^n}\left[W(\mathbf{x}+\xi,\tau|\mathbf{x})W(\mathbf{x},s|\mathbf{x}_0) +\right]. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We can then rewrite the ESKC equation as

+$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}\tau=-W(\mathbf{x},s|\mathbf{x}_0)+ +\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n} +\left[W(\mathbf{x},s|\mathbf{x}_0)\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi\right]. +$$ + +

    We have neglected higher powers of \( \tau \) and have used that for \( n=0 \) +we get simply \( W(\mathbf{x},s|\mathbf{x}_0) \) due to normalization. +

    +
    + + +


















    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    We simplify the above by introducing the moments

    +$$ +M_n=\frac{1}{\tau}\int_{-\infty}^{\infty} \xi^nW(\mathbf{x}+\xi,\tau|\mathbf{x})d\xi= +\frac{\langle [\Delta x(\tau)]^n\rangle}{\tau}, +$$ + +

    resulting in

+$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}= +\sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial x^n} +\left[W(\mathbf{x},s|\mathbf{x}_0)M_n\right]. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

When \( \tau \rightarrow 0 \) we assume that \( \langle [\Delta x(\tau)]^n\rangle \rightarrow 0 \) more rapidly than \( \tau \) itself if \( n > 2 \). +When \( \tau \) is much larger than the standard correlation time of +the system, then \( M_n \) for \( n > 2 \) can normally be neglected. +This means that fluctuations become negligible at large time scales. +

    + +

    If we neglect such terms we can rewrite the ESKC equation as

    +$$ +\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}= +-\frac{\partial M_1W(\mathbf{x},s|\mathbf{x}_0)}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W(\mathbf{x},s|\mathbf{x}_0)}{\partial x^2}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    + +

    +

    In a more compact form we have

    +$$ +\frac{\partial W}{\partial s}= +-\frac{\partial M_1W}{\partial x}+ +\frac{1}{2}\frac{\partial^2 M_2W}{\partial x^2}, +$$ + +

    which is the Fokker-Planck equation! It is trivial to replace +position with velocity (momentum). +

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

Consider a particle suspended in a liquid. On its path through the liquid it will continuously collide with the liquid molecules. Because on average the particle will collide more often on the front side than on the back side, it will experience a systematic force proportional to its velocity, and directed opposite to its velocity. Besides this systematic force the particle will experience a stochastic force \( \mathbf{F}(t) \). +The equations of motion are +

    +
      +
    • \( \frac{d\mathbf{r}}{dt}=\mathbf{v} \) and
    • +
    • \( \frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}+\mathbf{F} \).
    • +
    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    From hydrodynamics we know that the friction constant \( \xi \) is given by

    +$$ +\xi =6\pi \eta a/m +$$ + +

where \( \eta \) is the viscosity of the solvent and \( a \) is the radius of the particle.

    + +

    Solving the second equation in the previous slide we get

    +$$ +\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F }(\tau ). +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    If we want to get some useful information out of this, we have to average over all possible realizations of +\( \mathbf{F}(t) \), with the initial velocity as a condition. A useful quantity for example is +

+$$ +\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t} ++2\int_{0}^{t}d\tau e^{-\xi (2t-\tau)}\mathbf{v}_{0}\cdot \langle \mathbf{F}(\tau )\rangle_{\mathbf{v}_{0}} +$$ + +$$ + +\int_{0}^{t}d\tau ^{\prime }\int_{0}^{t}d\tau e^{-\xi (2t-\tau -\tau ^{\prime })} +\langle \mathbf{F}(\tau )\cdot \mathbf{F}(\tau ^{\prime })\rangle_{ \mathbf{v}_{0}}. +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    In order to continue we have to make some assumptions about the conditional averages of the stochastic forces. +In view of the chaotic character of the stochastic forces the following +assumptions seem to be appropriate +

    +$$ +\langle \mathbf{F}(t)\rangle=0, +$$ + +

    and

    +$$ +\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle_{\mathbf{v}_{0}}= C_{\mathbf{v}_{0}}\delta (t-t^{\prime }). +$$ + +

    We omit the subscript \( \mathbf{v}_{0} \), when the quantity of interest turns out to be independent of \( \mathbf{v}_{0} \). Using the last three equations we get

    +$$ +\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}+\frac{C_{\mathbf{v}_{0}}}{2\xi }(1-e^{-2\xi t}). +$$ + +

For large \( t \) this should be equal to \( 3kT/m \), from which it follows that

    +$$ +\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle =6\frac{kT}{m}\xi \delta (t-t^{\prime }). +$$ + +

This result is called the fluctuation-dissipation theorem.
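A hedged numerical check of the result above (assumed parameter values; an ensemble of Langevin trajectories started from a common \( \mathbf{v}_0 \) and compared with the closed-form expression):

import numpy as np

rng = np.random.default_rng(7)
xi, kT, m = 2.0, 1.0, 1.0
dt, t = 1e-3, 1.0
nsteps, nsamples = int(t/dt), 20000
C = 6*kT*xi/m                                         # fluctuation-dissipation strength
v = np.tile([1.0, 0.0, 0.0], (nsamples, 1))           # common v0, with v0^2 = 1
for _ in range(nsteps):
    F = rng.normal(0.0, np.sqrt(C/(3*dt)), size=(nsamples, 3))
    v += dt*(-xi*v + F)
simulated = np.mean(np.sum(v*v, axis=1))
exact = 1.0*np.exp(-2*xi*t) + C/(2*xi)*(1 - np.exp(-2*xi*t))
print(simulated, exact)                               # should be close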

    +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    Integrating

    +$$ +\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F }(\tau ), +$$ + +

    we get

+$$ +\mathbf{r}(t)=\mathbf{r}_{0}+\mathbf{v}_{0}\frac{1}{\xi }(1-e^{-\xi t})+ +\int_0^td\tau \int_0^{\tau}d\tau ^{\prime } e^{-\xi (\tau -\tau ^{\prime })}\mathbf{F}(\tau ^{\prime }), +$$ + +

    from which we calculate the mean square displacement

    +$$ +\langle ( \mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle _{\mathbf{v}_{0}}=\frac{v_0^2}{\xi}(1-e^{-\xi t})^{2}+\frac{3kT}{m\xi ^{2}}(2\xi t-3+4e^{-\xi t}-e^{-2\xi t}). +$$ +
    + + +









    +

    Importance sampling, Fokker-Planck and Langevin equations

    +
    +Langevin equation +

    +

    For very large \( t \) this becomes

    +$$ +\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =\frac{6kT}{m\xi }t +$$ + +

    from which we get the Einstein relation

    +$$ +D= \frac{kT}{m\xi } +$$ + +

    where we have used \( \langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =6Dt \).
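The Einstein relation can be checked with the same kind of hedged simulation (assumed parameters; for \( t\gg 1/\xi \) the simulated mean-square displacement should approach \( 6Dt \), up to the constant transient terms above):

import numpy as np

rng = np.random.default_rng(3)
xi, kT, m = 1.0, 1.0, 1.0
D = kT/(m*xi)                                # Einstein relation
dt, t, nsamples = 1e-3, 20.0, 1000
v = np.zeros((nsamples, 3)); r = np.zeros((nsamples, 3))
sigma = np.sqrt(2*kT*xi/(m*dt))              # white-noise strength per component
for _ in range(int(t/dt)):
    F = rng.normal(0.0, sigma, size=(nsamples, 3))
    v += dt*(-xi*v + F)
    r += dt*v
print(np.mean(np.sum(r*r, axis=1)), 6*D*t)   # comparable for large t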

    +
    + +
    - © 1999-2023, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license + © 1999-2024, Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no. Released under CC Attribution-NonCommercial 4.0 license
    diff --git a/doc/pub/week3/ipynb/ipynb-week3-src.tar.gz b/doc/pub/week3/ipynb/ipynb-week3-src.tar.gz index 5aaf7a07..5f300302 100644 Binary files a/doc/pub/week3/ipynb/ipynb-week3-src.tar.gz and b/doc/pub/week3/ipynb/ipynb-week3-src.tar.gz differ diff --git a/doc/pub/week3/ipynb/week3.ipynb b/doc/pub/week3/ipynb/week3.ipynb index 397d0b2b..fe9a5eaf 100644 --- a/doc/pub/week3/ipynb/week3.ipynb +++ b/doc/pub/week3/ipynb/week3.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "3800ed9f", + "id": "c8029688", "metadata": { "editable": true }, @@ -14,7 +14,7 @@ }, { "cell_type": "markdown", - "id": "ff73ddc4", + "id": "7f254918", "metadata": { "editable": true }, @@ -22,20 +22,20 @@ "# Week 5 January 29-February 2: Metropolis Algoritm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations\n", "**Morten Hjorth-Jensen Email morten.hjorth-jensen@fys.uio.no**, Department of Physics and Center fo Computing in Science Education, University of Oslo, Oslo, Norway and Department of Physics and Astronomy and Facility for Rare Isotope Beams, Michigan State University, East Lansing, Michigan, USA\n", "\n", - "Date: **February 6-10**" + "Date: **February 2**" ] }, { "cell_type": "markdown", - "id": "cb98b7bc", + "id": "804e0239", "metadata": { "editable": true }, "source": [ - "## Overview of week 5\n", + "## Overview of week 5, January 29-February 2\n", "**Topics.**\n", "\n", - "* Markov Chain Monte Carlo\n", + "* Markov Chain Monte Carlo and repetition from last week\n", "\n", "* Metropolis-Hastings sampling and Importance Sampling\n", "\n", @@ -45,1091 +45,2560 @@ "\n", "* [Video of lecture tba](https://youtu.be/)\n", "\n", - "* [Handwritten notes tba](https://github.com/CompPhysics/ComputationalPhysics2/blob/gh-pages/doc/HandWrittenNotes/2023/NotesFebruary2.pdf)\n", - "\n", - "* See also [Lectures from FYS3150/4150 on the Metropolis Algorithm](http://compphysics.github.io/ComputationalPhysics/doc/pub/rw/html/rw-bs.html)" - ] - }, - { - "cell_type": "markdown", - "id": "bd51a64d", - "metadata": { - "editable": true - }, - "source": [ - "## Basics of the Metropolis Algorithm\n", - "\n", - "The Metropolis et al.\n", - "algorithm was invented by Metropolis et. a\n", - "and is often simply called the Metropolis algorithm.\n", - "It is a method to sample a normalized probability\n", - "distribution by a stochastic process. 
We define ${\\cal P}_i^{(n)}$ to\n", - "be the probability for finding the system in the state $i$ at step $n$.\n", - "The algorithm is then" + "* [Handwritten notes tba](https://github.com/CompPhysics/ComputationalPhysics2/blob/gh-pages/doc/HandWrittenNotes/2023/NotesFebruary2.pdf)" ] }, { "cell_type": "markdown", - "id": "70f582cd", + "id": "87ea9221", "metadata": { "editable": true }, "source": [ - "## The basic of the Metropolis Algorithm\n", - "\n", - "* Sample a possible new state $j$ with some probability $T_{i\\rightarrow j}$.\n", - "\n", - "* Accept the new state $j$ with probability $A_{i \\rightarrow j}$ and use it as the next sample.\n", - "\n", - "* With probability $1-A_{i\\rightarrow j}$ the move is rejected and the original state $i$ is used again as a sample.\n", + "## Importance Sampling: Overview of what needs to be coded\n", "\n", - "We wish to derive the required properties of $T$ and $A$ such that\n", - "${\\cal P}_i^{(n\\rightarrow \\infty)} \\rightarrow p_i$ so that starting\n", - "from any distribution, the method converges to the correct distribution.\n", - "Note that the description here is for a discrete probability distribution.\n", - "Replacing probabilities $p_i$ with expressions like $p(x_i)dx_i$ will\n", - "take all of these over to the corresponding continuum expressions." + "For a diffusion process characterized by a time-dependent probability density $P(x,t)$ in one dimension the Fokker-Planck\n", + "equation reads (for one particle /walker)" ] }, { "cell_type": "markdown", - "id": "d43b485e", + "id": "f2f86752", "metadata": { "editable": true }, "source": [ - "## More on the Metropolis\n", - "\n", - "The dynamical equation for ${\\cal P}_i^{(n)}$ can be written directly from\n", - "the description above. The probability of being in the state $i$ at step $n$\n", - "is given by the probability of being in any state $j$ at the previous step,\n", - "and making an accepted transition to $i$ added to the probability of\n", - "being in the state $i$, making a transition to any state $j$ and\n", - "rejecting the move:" + "$$\n", + "\\frac{\\partial P}{\\partial t} = D\\frac{\\partial }{\\partial x}\\left(\\frac{\\partial }{\\partial x} -F\\right)P(x,t),\n", + "$$" ] }, { "cell_type": "markdown", - "id": "fdfc9b62", + "id": "91eb7aff", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", - "$$\n", - "\\begin{equation}\n", - "\\label{eq:eq1} \\tag{1}\n", - "{\\cal P}^{(n)}_i = \\sum_j \\left [\n", - "{\\cal P}^{(n-1)}_jT_{j\\rightarrow i} A_{j\\rightarrow i} \n", - "+{\\cal P}^{(n-1)}_iT_{i\\rightarrow j}\\left ( 1- A_{i\\rightarrow j} \\right)\n", - "\\right ] \\,.\n", - "\\end{equation}\n", - "$$" + "where $F$ is a drift term and $D$ is the diffusion coefficient." ] }, { "cell_type": "markdown", - "id": "f655ee4f", + "id": "bc0932dc", "metadata": { "editable": true }, "source": [ - "## Metropolis algorithm, setting it up\n", - "Since the probability of making some transition must be 1,\n", - "$\\sum_j T_{i\\rightarrow j} = 1$, and Eq. ([1](#eq:eq1)) becomes" + "## Importance sampling\n", + "The new positions in coordinate space are given as the solutions of the Langevin equation using Euler's method, namely,\n", + "we go from the Langevin equation" ] }, { "cell_type": "markdown", - "id": "188ede6a", + "id": "d134cf8f", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", "$$\n", - "\\begin{equation}\n", - "{\\cal P}^{(n)}_i = {\\cal P}^{(n-1)}_i +\n", - " \\sum_j \\left [\n", - "{\\cal P}^{(n-1)}_jT_{j\\rightarrow i} A_{j\\rightarrow i} \n", - "-{\\cal P}^{(n-1)}_iT_{i\\rightarrow j}A_{i\\rightarrow j}\n", - "\\right ] \\,.\n", - "\\label{_auto1} \\tag{2}\n", - "\\end{equation}\n", + "\\frac{\\partial x(t)}{\\partial t} = DF(x(t)) +\\eta,\n", "$$" ] }, { "cell_type": "markdown", - "id": "579cb342", + "id": "af9dd710", "metadata": { "editable": true }, "source": [ - "## Metropolis continues\n", - "\n", - "For large $n$ we require that ${\\cal P}^{(n\\rightarrow \\infty)}_i = p_i$,\n", - "the desired probability distribution. Taking this limit, gives the\n", - "balance requirement" + "with $\\eta$ a random variable,\n", + "yielding a new position" ] }, { "cell_type": "markdown", - "id": "e36ed395", + "id": "b770dc2d", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", "$$\n", - "\\begin{equation}\n", - "\\sum_j \\left [p_jT_{j\\rightarrow i} A_{j\\rightarrow i}-p_iT_{i\\rightarrow j}A_{i\\rightarrow j}\n", - "\\right ] = 0,\n", - "\\label{_auto2} \\tag{3}\n", - "\\end{equation}\n", + "y = x+DF(x)\\Delta t +\\xi\\sqrt{\\Delta t},\n", "$$" ] }, { "cell_type": "markdown", - "id": "f3eee488", + "id": "14adf436", "metadata": { "editable": true }, "source": [ - "## Detailed Balance\n", - "\n", - "The balance requirement is very weak. Typically the much stronger detailed\n", - "balance requirement is enforced, that is rather than the sum being\n", - "set to zero, we set each term separately to zero and use this\n", - "to determine the acceptance probabilities. Rearranging, the result is" + "where $\\xi$ is gaussian random variable and $\\Delta t$ is a chosen time step. \n", + "The quantity $D$ is, in atomic units, equal to $1/2$ and comes from the factor $1/2$ in the kinetic energy operator. Note that $\\Delta t$ is to be viewed as a parameter. Values of $\\Delta t \\in [0.001,0.01]$ yield in general rather stable values of the ground state energy." ] }, { "cell_type": "markdown", - "id": "107196d5", + "id": "28ed8b44", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", - "$$\n", - "\\begin{equation}\n", - "\\frac{ A_{j\\rightarrow i}}{A_{i\\rightarrow j}}\n", - "= \\frac{p_iT_{i\\rightarrow j}}{ p_jT_{j\\rightarrow i}} \\,.\n", - "\\label{_auto3} \\tag{4}\n", - "\\end{equation}\n", - "$$" + "## Importance sampling\n", + "The process of isotropic diffusion characterized by a time-dependent probability density $P(\\mathbf{x},t)$ obeys (as an approximation) the so-called Fokker-Planck equation" ] }, { "cell_type": "markdown", - "id": "7c167a02", + "id": "88eec3cd", "metadata": { "editable": true }, "source": [ - "## More on Detailed Balance\n", - "\n", - "The Metropolis choice is to maximize the $A$ values, that is" + "$$\n", + "\\frac{\\partial P}{\\partial t} = \\sum_i D\\frac{\\partial }{\\partial \\mathbf{x_i}}\\left(\\frac{\\partial }{\\partial \\mathbf{x_i}} -\\mathbf{F_i}\\right)P(\\mathbf{x},t),\n", + "$$" ] }, { "cell_type": "markdown", - "id": "362f5f13", + "id": "0b14e443", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", - "$$\n", - "\\begin{equation}\n", - "A_{j \\rightarrow i} = \\min \\left ( 1,\n", - "\\frac{p_iT_{i\\rightarrow j}}{ p_jT_{j\\rightarrow i}}\\right ).\n", - "\\label{_auto4} \\tag{5}\n", - "\\end{equation}\n", - "$$" + "where $\\mathbf{F_i}$ is the $i^{th}$ component of the drift term (drift velocity) caused by an external potential, and $D$ is the diffusion coefficient. The convergence to a stationary probability density can be obtained by setting the left hand side to zero. The resulting equation will be satisfied if and only if all the terms of the sum are equal zero," ] }, { "cell_type": "markdown", - "id": "a7ad3974", + "id": "3c3a819e", "metadata": { "editable": true }, "source": [ - "Other choices are possible, but they all correspond to multilplying\n", - "$A_{i\\rightarrow j}$ and $A_{j\\rightarrow i}$ by the same constant\n", - "smaller than unity. The penalty function method uses just such\n", - "a factor to compensate for $p_i$ that are evaluated stochastically\n", - "and are therefore noisy.\n", - "\n", - "Having chosen the acceptance probabilities, we have guaranteed that\n", - "if the ${\\cal P}_i^{(n)}$ has equilibrated, that is if it is equal to $p_i$,\n", - "it will remain equilibrated. Next we need to find the circumstances for\n", - "convergence to equilibrium." + "$$\n", + "\\frac{\\partial^2 P}{\\partial {\\mathbf{x_i}^2}} = P\\frac{\\partial}{\\partial {\\mathbf{x_i}}}\\mathbf{F_i} + \\mathbf{F_i}\\frac{\\partial}{\\partial {\\mathbf{x_i}}}P.\n", + "$$" ] }, { "cell_type": "markdown", - "id": "5a785b1a", + "id": "38fa2f4a", "metadata": { "editable": true }, "source": [ - "## Dynamical Equation\n", - "\n", - "The dynamical equation can be written as" + "## Importance sampling\n", + "The drift vector should be of the form $\\mathbf{F} = g(\\mathbf{x}) \\frac{\\partial P}{\\partial \\mathbf{x}}$. Then," ] }, { "cell_type": "markdown", - "id": "a82de172", + "id": "b379cb3a", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", "$$\n", - "\\begin{equation}\n", - "{\\cal P}^{(n)}_i = \\sum_j M_{ij}{\\cal P}^{(n-1)}_j\n", - "\\label{_auto5} \\tag{6}\n", - "\\end{equation}\n", + "\\frac{\\partial^2 P}{\\partial {\\mathbf{x_i}^2}} = P\\frac{\\partial g}{\\partial P}\\left( \\frac{\\partial P}{\\partial {\\mathbf{x}_i}} \\right)^2 + P g \\frac{\\partial ^2 P}{\\partial {\\mathbf{x}_i^2}} + g \\left( \\frac{\\partial P}{\\partial {\\mathbf{x}_i}} \\right)^2.\n", "$$" ] }, { "cell_type": "markdown", - "id": "18ff6e06", + "id": "187ca6e4", "metadata": { "editable": true }, "source": [ - "with the matrix $M$ given by" + "The condition of stationary density means that the left hand side equals zero. In other words, the terms containing first and second derivatives have to cancel each other. It is possible only if $g = \\frac{1}{P}$, which yields" ] }, { "cell_type": "markdown", - "id": "494f9f0a", + "id": "520404fa", "metadata": { "editable": true }, "source": [ - "\n", - "
    \n", - "\n", "$$\n", - "\\begin{equation}\n", - "M_{ij} = \\delta_{ij}\\left [ 1 -\\sum_k T_{i\\rightarrow k} A_{i \\rightarrow k}\n", - "\\right ] + T_{j\\rightarrow i} A_{j\\rightarrow i} \\,.\n", - "\\label{_auto6} \\tag{7}\n", - "\\end{equation}\n", + "\\mathbf{F} = 2\\frac{1}{\\Psi_T}\\nabla\\Psi_T,\n", "$$" ] }, { "cell_type": "markdown", - "id": "42a29bf6", + "id": "3bbb1eac", "metadata": { "editable": true }, "source": [ - "Summing over $i$ shows that $\\sum_i M_{ij} = 1$, and since\n", - "$\\sum_k T_{i\\rightarrow k} = 1$, and $A_{i \\rightarrow k} \\leq 1$, the\n", - "elements of the matrix satisfy $M_{ij} \\geq 0$. The matrix $M$ is therefore\n", - "a stochastic matrix." + "which is known as the so-called *quantum force*. This term is responsible for pushing the walker towards regions of configuration space where the trial wave function is large, increasing the efficiency of the simulation in contrast to the Metropolis algorithm where the walker has the same probability of moving in every direction." ] }, { "cell_type": "markdown", - "id": "9c6af387", + "id": "a1090b31", "metadata": { "editable": true }, "source": [ - "## Interpreting the Metropolis Algorithm\n", - "\n", - "The Metropolis method is simply the power method for computing the\n", - "right eigenvector of $M$ with the largest magnitude eigenvalue.\n", - "By construction, the correct probability distribution is a right eigenvector\n", - "with eigenvalue 1. Therefore, for the Metropolis method to converge\n", - "to this result, we must show that $M$ has only one eigenvalue with this\n", - "magnitude, and all other eigenvalues are smaller.\n", - "\n", - "Even a defective matrix has at least one left and right eigenvector for\n", - "each eigenvalue. An example of a defective matrix is" + "## Importance sampling\n", + "The Fokker-Planck equation yields a (the solution to the equation) transition probability given by the Green's function" ] }, { "cell_type": "markdown", - "id": "1de2c4b7", + "id": "930bc975", "metadata": { "editable": true }, "source": [ "$$\n", - "\\begin{bmatrix}\n", - "0 & 1\\\\\n", - "0 & 0 \\\\\n", - "\\end{bmatrix},\n", + "G(y,x,\\Delta t) = \\frac{1}{(4\\pi D\\Delta t)^{3N/2}} \\exp{\\left(-(y-x-D\\Delta t F(x))^2/4D\\Delta t\\right)}\n", "$$" ] }, { "cell_type": "markdown", - "id": "43d3eb0c", + "id": "817eb71e", "metadata": { "editable": true }, "source": [ - "with two zero eigenvalues, only one right eigenvector" + "which in turn means that our brute force Metropolis algorithm" ] }, { "cell_type": "markdown", - "id": "1640e2de", + "id": "40840ec1", "metadata": { "editable": true }, "source": [ "$$\n", - "\\begin{bmatrix}\n", - "1 \\\\\n", - "0\\\\\n", - "\\end{bmatrix}\n", + "A(y,x) = \\mathrm{min}(1,q(y,x))),\n", "$$" ] }, { "cell_type": "markdown", - "id": "3e67097e", - "metadata": { - "editable": true - }, - "source": [ - "and only one left eigenvector $(0\\ 1)$." 
- ] - }, - { - "cell_type": "markdown", - "id": "cf757478", + "id": "af6280a5", "metadata": { "editable": true }, "source": [ - "## Gershgorin bounds and Metropolis\n", - "\n", - "The Gershgorin bounds for the eigenvalues can be derived by multiplying on\n", - "the left with the eigenvector with the maximum and minimum eigenvalues," + "with $q(y,x) = |\\Psi_T(y)|^2/|\\Psi_T(x)|^2$ is now replaced by the [Metropolis-Hastings algorithm](http://scitation.aip.org/content/aip/journal/jcp/21/6/10.1063/1.1699114) as well as [Hasting's article](http://biomet.oxfordjournals.org/content/57/1/97.abstract)," ] }, { "cell_type": "markdown", - "id": "d23d767f", + "id": "d2406628", "metadata": { "editable": true }, "source": [ "$$\n", - "\\sum_i \\psi^{\\rm max}_i M_{ij} = \\lambda_{\\rm max} \\psi^{\\rm max}_j\n", - "\\nonumber\n", + "q(y,x) = \\frac{G(x,y,\\Delta t)|\\Psi_T(y)|^2}{G(y,x,\\Delta t)|\\Psi_T(x)|^2}\n", "$$" ] }, { "cell_type": "markdown", - "id": "c8731186", + "id": "dd581966", "metadata": { "editable": true }, "source": [ - "\n", - "
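
A compact sketch of the resulting Metropolis-Hastings test (assumed helper signatures, not the original code); since only the ratio of Green's functions enters, it is numerically safer to work with $\log G$, where the normalization $(4\pi D\Delta t)^{-3N/2}$ cancels:

```python
import numpy as np

def log_greens(y, x, Fx, dt, D=0.5):
    # log G(y, x, dt) up to the normalization, which cancels in the ratio
    return -np.sum((y - x - D * dt * Fx)**2) / (4.0 * D * dt)

def metropolis_hastings_accept(x, y, psi_x, psi_y, Fx, Fy, dt, rng, D=0.5):
    # q(y,x) = G(x, y, dt) |Psi_T(y)|^2 / ( G(y, x, dt) |Psi_T(x)|^2 )
    logq = log_greens(x, y, Fy, dt, D) - log_greens(y, x, Fx, dt, D)
    q = np.exp(logq) * (psi_y / psi_x)**2
    return rng.random() <= q      # realizes min(1, q)

rng = np.random.default_rng(2024)
x, y = np.zeros(2), np.array([0.05, -0.02])
print(metropolis_hastings_accept(x, y, 1.0, 0.98, -2.0*x, -2.0*y, 0.01, rng))
```
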
    \n", + "## Code example for the interacting case with importance sampling\n", "\n", - "$$\n", - "\\begin{equation} \n", - "\\sum_i \\psi^{\\rm min}_i M_{ij} = \\lambda_{\\rm min} \\psi^{\\rm min}_j\n", - "\\label{_auto7} \\tag{8}\n", - "\\end{equation}\n", - "$$" + "We are now ready to implement importance sampling. This is done here for the two-electron case with the Coulomb interaction, as in the previous example. We have two variational parameters $\\alpha$ and $\\beta$. After the set up of files" ] }, { - "cell_type": "markdown", - "id": "caf26130", + "cell_type": "code", + "execution_count": 1, + "id": "32f827ec", "metadata": { + "collapsed": false, "editable": true }, + "outputs": [], "source": [ - "## Normalizing the Eigenvectors\n", + "# Common imports\n", + "import os\n", "\n", - "Next we choose the normalization of these eigenvectors so that the\n", - "largest element (or one of the equally largest elements)\n", - "has value 1. Let's call this element $k$, and\n", - "we can therefore bound the magnitude of the other elements to be less\n", - "than or equal to 1.\n", - "This leads to the inequalities, using the property that $M_{ij}\\geq 0$," - ] - }, - { - "cell_type": "markdown", - "id": "51fc4c72", - "metadata": { - "editable": true - }, - "source": [ - "$$\n", - "\\begin{eqnarray}\n", - "\\sum_i M_{ik} \\leq \\lambda_{\\rm max}\n", - "\\nonumber\\\\\n", - "M_{kk}-\\sum_{i \\neq k} M_{ik} \\geq \\lambda_{\\rm min}\n", - "\\end{eqnarray}\n", - "$$" + "# Where to save the figures and data files\n", + "PROJECT_ROOT_DIR = \"Results\"\n", + "FIGURE_ID = \"Results/FigureFiles\"\n", + "DATA_ID = \"Results/VMCQdotImportance\"\n", + "\n", + "if not os.path.exists(PROJECT_ROOT_DIR):\n", + " os.mkdir(PROJECT_ROOT_DIR)\n", + "\n", + "if not os.path.exists(FIGURE_ID):\n", + " os.makedirs(FIGURE_ID)\n", + "\n", + "if not os.path.exists(DATA_ID):\n", + " os.makedirs(DATA_ID)\n", + "\n", + "def image_path(fig_id):\n", + " return os.path.join(FIGURE_ID, fig_id)\n", + "\n", + "def data_path(dat_id):\n", + " return os.path.join(DATA_ID, dat_id)\n", + "\n", + "def save_fig(fig_id):\n", + " plt.savefig(image_path(fig_id) + \".png\", format='png')\n", + "\n", + "outfile = open(data_path(\"VMCQdotImportance.dat\"),'w')" ] }, { "cell_type": "markdown", - "id": "ed34c5a9", + "id": "3d268c3a", "metadata": { "editable": true }, "source": [ - "where the equality from the maximum\n", - "will occur only if the eigenvector takes the value 1 for all values of\n", - "$i$ where $M_{ik} \\neq 0$, and the equality for the minimum will\n", - "occur only if the eigenvector takes the value -1 for all values of $i\\neq k$\n", - "where $M_{ik} \\neq 0$." + "we move on to the set up of the trial wave function, the analytical expression for the local energy and the analytical expression for the quantum force." 
] }, { - "cell_type": "markdown", - "id": "f7a42358", + "cell_type": "code", + "execution_count": 2, + "id": "de81309b", "metadata": { + "collapsed": false, "editable": true }, + "outputs": [], "source": [ - "## More Metropolis analysis\n", + "%matplotlib inline\n", + "\n", + "# 2-electron VMC code for 2dim quantum dot with importance sampling\n", + "# Using gaussian rng for new positions and Metropolis- Hastings \n", + "# No energy minimization\n", + "from math import exp, sqrt\n", + "from random import random, seed, normalvariate\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "from mpl_toolkits.mplot3d import Axes3D\n", + "from matplotlib import cm\n", + "from matplotlib.ticker import LinearLocator, FormatStrFormatter\n", + "import sys\n", + "\n", + "\n", + "# Trial wave function for the 2-electron quantum dot in two dims\n", + "def WaveFunction(r,alpha,beta):\n", + " r1 = r[0,0]**2 + r[0,1]**2\n", + " r2 = r[1,0]**2 + r[1,1]**2\n", + " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", + " deno = r12/(1+beta*r12)\n", + " return exp(-0.5*alpha*(r1+r2)+deno)\n", + "\n", + "# Local energy for the 2-electron quantum dot in two dims, using analytical local energy\n", + "def LocalEnergy(r,alpha,beta):\n", + " \n", + " r1 = (r[0,0]**2 + r[0,1]**2)\n", + " r2 = (r[1,0]**2 + r[1,1]**2)\n", + " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", + " deno = 1.0/(1+beta*r12)\n", + " deno2 = deno*deno\n", + " return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)\n", "\n", - "That the maximum eigenvalue is 1 follows immediately from the property\n", - "that $\\sum_i M_{ik} = 1$. Similarly the minimum eigenvalue can be -1,\n", - "but only if $M_{kk} = 0$ and the magnitude of all the other elements\n", - "$\\psi_i^{\\rm min}$ of\n", - "the eigenvector that multiply nonzero elements $M_{ik}$ are -1.\n", + "# Setting up the quantum force for the two-electron quantum dot, recall that it is a vector\n", + "def QuantumForce(r,alpha,beta):\n", "\n", - "Let's first see what the properties of $M$ must be\n", - "to eliminate any -1 eigenvalues. \n", - "To have a -1 eigenvalue, the left eigenvector must contain only $\\pm 1$\n", - "and $0$ values. Taking in turn each $\\pm 1$ value as the maximum, so that\n", - "it corresponds to the index $k$, the nonzero $M_{ik}$ values must\n", - "correspond to $i$ index values of the eigenvector which have opposite\n", - "sign elements. That is, the $M$ matrix must break up into sets of\n", - "states that always make transitions from set A to set B ... back to set A.\n", - "In particular, there can be no rejections of these moves in the cycle\n", - "since the -1 eigenvalue requires $M_{kk}=0$. To guarantee no eigenvalues\n", - "with eigenvalue -1, we simply have to make sure that there are no\n", - "cycles among states. Notice that this is generally trivial since such\n", - "cycles cannot have any rejections at any stage. An example of such\n", - "a cycle is sampling a noninteracting Ising spin. If the transition is\n", - "taken to flip the spin, and the energy difference is zero, the Boltzmann\n", - "factor will not change and the move will always be accepted. The system\n", - "will simply flip from up to down to up to down ad infinitum. Including\n", - "a rejection probability or using a heat bath algorithm\n", - "immediately fixes the problem." 
+ " qforce = np.zeros((NumberParticles,Dimension), np.double)\n", + " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", + " deno = 1.0/(1+beta*r12)\n", + " qforce[0,:] = -2*r[0,:]*alpha*(r[0,:]-r[1,:])*deno*deno/r12\n", + " qforce[1,:] = -2*r[1,:]*alpha*(r[1,:]-r[0,:])*deno*deno/r12\n", + " return qforce" ] }, { "cell_type": "markdown", - "id": "b112ed29", + "id": "7f0b0ab4", "metadata": { "editable": true }, "source": [ - "## Final Considerations I\n", - "\n", - "Next we need to make sure that there is only one left eigenvector\n", - "with eigenvalue 1. To get an eigenvalue 1, the left eigenvector must be \n", - "constructed from only ones and zeroes. It is straightforward to\n", - "see that a vector made up of\n", - "ones and zeroes can only be an eigenvector with eigenvalue 1 if the \n", - "matrix element $M_{ij} = 0$ for all cases where $\\psi_i \\neq \\psi_j$.\n", - "That is we can choose an index $i$ and take $\\psi_i = 1$.\n", - "We require all elements $\\psi_j$ where $M_{ij} \\neq 0$ to also have\n", - "the value $1$. Continuing we then require all elements $\\psi_\\ell$ $M_{j\\ell}$\n", - "to have value $1$. Only if the matrix $M$ can be put into block diagonal\n", - "form can there be more than one choice for the left eigenvector with\n", - "eigenvalue 1. We therefore require that the transition matrix not\n", - "be in block diagonal form. This simply means that we must choose\n", - "the transition probability so that we can get from any allowed state\n", - "to any other in a series of transitions." + "The Monte Carlo sampling includes now the Metropolis-Hastings algorithm, with the additional complication of having to evaluate the **quantum force** and the Green's function which is the solution of the Fokker-Planck equation." ] }, { - "cell_type": "markdown", - "id": "86c87966", + "cell_type": "code", + "execution_count": 3, + "id": "aaefb97f", "metadata": { + "collapsed": false, "editable": true }, + "outputs": [], "source": [ - "## Final Considerations II\n", + "# The Monte Carlo sampling with the Metropolis algo\n", + "def MonteCarloSampling():\n", + "\n", + " NumberMCcycles= 100000\n", + " # Parameters in the Fokker-Planck simulation of the quantum force\n", + " D = 0.5\n", + " TimeStep = 0.05\n", + " # positions\n", + " PositionOld = np.zeros((NumberParticles,Dimension), np.double)\n", + " PositionNew = np.zeros((NumberParticles,Dimension), np.double)\n", + " # Quantum force\n", + " QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)\n", + " QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)\n", "\n", - "Finally, we note that for a defective matrix, with more eigenvalues\n", - "than independent eigenvectors for eigenvalue 1,\n", - "the left and right\n", - "eigenvectors of eigenvalue 1 would be orthogonal.\n", - "Here the left eigenvector is all 1\n", - "except for states that can never be reached, and the right eigenvector\n", - "is $p_i > 0$ except for states that give zero probability. We already\n", - "require that we can reach\n", - "all states that contribute to $p_i$. Therefore the left and right\n", - "eigenvectors with eigenvalue 1 do not correspond to a defective sector\n", - "of the matrix and they are unique. The Metropolis algorithm therefore\n", - "converges exponentially to the desired distribution." 
+ " # seed for rng generator \n", + " seed()\n", + " # start variational parameter loops, two parameters here\n", + " alpha = 0.9\n", + " for ia in range(MaxVariations):\n", + " alpha += .025\n", + " AlphaValues[ia] = alpha\n", + " beta = 0.2 \n", + " for jb in range(MaxVariations):\n", + " beta += .01\n", + " BetaValues[jb] = beta\n", + " energy = energy2 = 0.0\n", + " DeltaE = 0.0\n", + " #Initial position\n", + " for i in range(NumberParticles):\n", + " for j in range(Dimension):\n", + " PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)\n", + " wfold = WaveFunction(PositionOld,alpha,beta)\n", + " QuantumForceOld = QuantumForce(PositionOld,alpha, beta)\n", + "\n", + " #Loop over MC MCcycles\n", + " for MCcycle in range(NumberMCcycles):\n", + " #Trial position moving one particle at the time\n", + " for i in range(NumberParticles):\n", + " for j in range(Dimension):\n", + " PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\\\n", + " QuantumForceOld[i,j]*TimeStep*D\n", + " wfnew = WaveFunction(PositionNew,alpha,beta)\n", + " QuantumForceNew = QuantumForce(PositionNew,alpha, beta)\n", + " GreensFunction = 0.0\n", + " for j in range(Dimension):\n", + " GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\\\n", + "\t (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\\\n", + " PositionNew[i,j]+PositionOld[i,j])\n", + " \n", + " GreensFunction = exp(GreensFunction)\n", + " ProbabilityRatio = GreensFunction*wfnew**2/wfold**2\n", + " #Metropolis-Hastings test to see whether we accept the move\n", + " if random() <= ProbabilityRatio:\n", + " for j in range(Dimension):\n", + " PositionOld[i,j] = PositionNew[i,j]\n", + " QuantumForceOld[i,j] = QuantumForceNew[i,j]\n", + " wfold = wfnew\n", + " DeltaE = LocalEnergy(PositionOld,alpha,beta)\n", + " energy += DeltaE\n", + " energy2 += DeltaE**2\n", + " # We calculate mean, variance and error (no blocking applied)\n", + " energy /= NumberMCcycles\n", + " energy2 /= NumberMCcycles\n", + " variance = energy2 - energy**2\n", + " error = sqrt(variance/NumberMCcycles)\n", + " Energies[ia,jb] = energy \n", + " outfile.write('%f %f %f %f %f\\n' %(alpha,beta,energy,variance,error))\n", + " return Energies, AlphaValues, BetaValues" ] }, { "cell_type": "markdown", - "id": "9f0536ef", + "id": "f76c4763", "metadata": { "editable": true }, "source": [ - "## Final Considerations III\n", - "\n", - "The requirements for the transition $T_{i \\rightarrow j}$ are\n", - "* A series of transitions must let us to get from any allowed state to any other by a finite series of transitions.\n", - "\n", - "* The transitions cannot be grouped into sets of states, A, B, C ,... such that transitions from $A$ go to $B$, $B$ to $C$ etc and finally back to $A$. With condition (a) satisfied, this condition will always be satisfied if either $T_{i \\rightarrow i} \\neq 0$ or there are some rejected moves." + "The main part here contains the setup of the variational parameters, the energies and the variance." 
] }, { - "cell_type": "markdown", - "id": "1e6cda97", + "cell_type": "code", + "execution_count": 4, + "id": "2235b362", "metadata": { + "collapsed": false, "editable": true }, + "outputs": [], "source": [ - "## Importance Sampling: Overview of what needs to be coded\n", - "\n", - "For a diffusion process characterized by a time-dependent probability density $P(x,t)$ in one dimension the Fokker-Planck\n", - "equation reads (for one particle /walker)" + "#Here starts the main program with variable declarations\n", + "NumberParticles = 2\n", + "Dimension = 2\n", + "MaxVariations = 10\n", + "Energies = np.zeros((MaxVariations,MaxVariations))\n", + "AlphaValues = np.zeros(MaxVariations)\n", + "BetaValues = np.zeros(MaxVariations)\n", + "(Energies, AlphaValues, BetaValues) = MonteCarloSampling()\n", + "outfile.close()\n", + "# Prepare for plots\n", + "fig = plt.figure()\n", + "ax = fig.gca(projection='3d')\n", + "# Plot the surface.\n", + "X, Y = np.meshgrid(AlphaValues, BetaValues)\n", + "surf = ax.plot_surface(X, Y, Energies,cmap=cm.coolwarm,linewidth=0, antialiased=False)\n", + "# Customize the z axis.\n", + "zmin = np.matrix(Energies).min()\n", + "zmax = np.matrix(Energies).max()\n", + "ax.set_zlim(zmin, zmax)\n", + "ax.set_xlabel(r'$\\alpha$')\n", + "ax.set_ylabel(r'$\\beta$')\n", + "ax.set_zlabel(r'$\\langle E \\rangle$')\n", + "ax.zaxis.set_major_locator(LinearLocator(10))\n", + "ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\n", + "# Add a color bar which maps values to colors.\n", + "fig.colorbar(surf, shrink=0.5, aspect=5)\n", + "save_fig(\"QdotImportance\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "id": "1566692e", + "metadata": { + "editable": true + }, + "source": [ + "## Importance sampling, program elements\n", + "The general derivative formula of the Jastrow factor is (the subscript $C$ stands for Correlation)" + ] + }, + { + "cell_type": "markdown", + "id": "d5452fa8", + "metadata": { + "editable": true + }, + "source": [ + "$$\n", + "\\frac{1}{\\Psi_C}\\frac{\\partial \\Psi_C}{\\partial x_k} =\n", + "\\sum_{i=1}^{k-1}\\frac{\\partial g_{ik}}{\\partial x_k}\n", + "+\n", + "\\sum_{i=k+1}^{N}\\frac{\\partial g_{ki}}{\\partial x_k}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "600f3d4b", + "metadata": { + "editable": true + }, + "source": [ + "However, \n", + "with our written in way which can be reused later as" + ] + }, + { + "cell_type": "markdown", + "id": "88d739b4", + "metadata": { + "editable": true + }, + "source": [ + "$$\n", + "\\Psi_C=\\prod_{i< j}g(r_{ij})= \\exp{\\left\\{\\sum_{i 2$. 
\n", + "When $\\tau$ is much larger than the standard correlation time of \n", + "system then $M_n$ for $n > 2$ can normally be neglected.\n", + "This means that fluctuations become negligible at large time scales.\n", + "\n", + "If we neglect such terms we can rewrite the ESKC equation as" + ] + }, + { + "cell_type": "markdown", + "id": "3baf7e6d", + "metadata": { + "editable": true + }, + "source": [ + "$$\n", + "\\frac{\\partial W(\\mathbf{x},s|\\mathbf{x}_0)}{\\partial s}=\n", + "-\\frac{\\partial M_1W(\\mathbf{x},s|\\mathbf{x}_0)}{\\partial x}+\n", + "\\frac{1}{2}\\frac{\\partial^2 M_2W(\\mathbf{x},s|\\mathbf{x}_0)}{\\partial x^2}.\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "id": "99b5a7b6", + "metadata": { + "editable": true + }, + "source": [ + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "In a more compact form we have" ] }, { "cell_type": "markdown", - "id": "cdb99106", + "id": "b2202f89", "metadata": { "editable": true }, "source": [ "$$\n", - "\\frac{\\partial P}{\\partial t} = D\\frac{\\partial }{\\partial x}\\left(\\frac{\\partial }{\\partial x} -F\\right)P(x,t),\n", + "\\frac{\\partial W}{\\partial s}=\n", + "-\\frac{\\partial M_1W}{\\partial x}+\n", + "\\frac{1}{2}\\frac{\\partial^2 M_2W}{\\partial x^2},\n", "$$" ] }, { "cell_type": "markdown", - "id": "16a7b9a6", + "id": "4a806f74", "metadata": { "editable": true }, "source": [ - "where $F$ is a drift term and $D$ is the diffusion coefficient." + "which is the Fokker-Planck equation! It is trivial to replace \n", + "position with velocity (momentum)." ] }, { "cell_type": "markdown", - "id": "7dfec96a", + "id": "f5e2836e", "metadata": { "editable": true }, "source": [ - "## Importance sampling\n", - "The new positions in coordinate space are given as the solutions of the Langevin equation using Euler's method, namely,\n", - "we go from the Langevin equation" + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", + "\n", + "Consider a particle suspended in a liquid. On its path through the liquid it will continuously collide with the liquid molecules. Because on average the particle will collide more often on the front side than on the back side, it will experience a systematic force proportional with its velocity, and directed opposite to its velocity. Besides this systematic force the particle will experience a stochastic force $\\mathbf{F}(t)$. \n", + "The equations of motion are \n", + "* $\\frac{d\\mathbf{r}}{dt}=\\mathbf{v}$ and \n", + "\n", + "* $\\frac{d\\mathbf{v}}{dt}=-\\xi \\mathbf{v}+\\mathbf{F}$." 
+ ] + }, + { + "cell_type": "markdown", + "id": "a39a29ba", + "metadata": { + "editable": true + }, + "source": [ + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", + "\n", + "From hydrodynamics we know that the friction constant $\\xi$ is given by" ] }, { "cell_type": "markdown", - "id": "cbe85afa", + "id": "3749cd86", "metadata": { "editable": true }, "source": [ "$$\n", - "\\frac{\\partial x(t)}{\\partial t} = DF(x(t)) +\\eta,\n", + "\\xi =6\\pi \\eta a/m\n", "$$" ] }, { "cell_type": "markdown", - "id": "50c94801", + "id": "0d3ecaf9", "metadata": { "editable": true }, "source": [ - "with $\\eta$ a random variable,\n", - "yielding a new position" + "where $\\eta$ is the viscosity of the solvent and a is the radius of the particle .\n", + "\n", + "Solving the second equation in the previous slide we get" ] }, { "cell_type": "markdown", - "id": "9d132191", + "id": "f29ce820", "metadata": { "editable": true }, "source": [ "$$\n", - "y = x+DF(x)\\Delta t +\\xi\\sqrt{\\Delta t},\n", + "\\mathbf{v}(t)=\\mathbf{v}_{0}e^{-\\xi t}+\\int_{0}^{t}d\\tau e^{-\\xi (t-\\tau )}\\mathbf{F }(\\tau ).\n", "$$" ] }, { "cell_type": "markdown", - "id": "826fee92", + "id": "7e5386b1", "metadata": { "editable": true }, "source": [ - "where $\\xi$ is gaussian random variable and $\\Delta t$ is a chosen time step. \n", - "The quantity $D$ is, in atomic units, equal to $1/2$ and comes from the factor $1/2$ in the kinetic energy operator. Note that $\\Delta t$ is to be viewed as a parameter. Values of $\\Delta t \\in [0.001,0.01]$ yield in general rather stable values of the ground state energy." + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", + "\n", + "If we want to get some useful information out of this, we have to average over all possible realizations of \n", + "$\\mathbf{F}(t)$, with the initial velocity as a condition. A useful quantity for example is" ] }, { "cell_type": "markdown", - "id": "9d94c0c9", + "id": "3fa432d3", "metadata": { "editable": true }, "source": [ - "## Importance sampling\n", - "The process of isotropic diffusion characterized by a time-dependent probability density $P(\\mathbf{x},t)$ obeys (as an approximation) the so-called Fokker-Planck equation" + "$$\n", + "\\langle \\mathbf{v}(t)\\cdot \\mathbf{v}(t)\\rangle_{\\mathbf{v}_{0}}=v_{0}^{-\\xi 2t}\n", + "+2\\int_{0}^{t}d\\tau e^{-\\xi (2t-\\tau)}\\mathbf{v}_{0}\\cdot \\langle \\mathbf{F}(\\tau )\\rangle_{\\mathbf{v}_{0}}\n", + "$$" ] }, { "cell_type": "markdown", - "id": "51f146f9", + "id": "a89e0187", "metadata": { "editable": true }, "source": [ "$$\n", - "\\frac{\\partial P}{\\partial t} = \\sum_i D\\frac{\\partial }{\\partial \\mathbf{x_i}}\\left(\\frac{\\partial }{\\partial \\mathbf{x_i}} -\\mathbf{F_i}\\right)P(\\mathbf{x},t),\n", + "+\\int_{0}^{t}d\\tau ^{\\prime }\\int_{0}^{t}d\\tau e^{-\\xi (2t-\\tau -\\tau ^{\\prime })}\n", + "\\langle \\mathbf{F}(\\tau )\\cdot \\mathbf{F}(\\tau ^{\\prime })\\rangle_{ \\mathbf{v}_{0}}.\n", "$$" ] }, { "cell_type": "markdown", - "id": "6b68d4ae", + "id": "af43ccf1", "metadata": { "editable": true }, "source": [ - "where $\\mathbf{F_i}$ is the $i^{th}$ component of the drift term (drift velocity) caused by an external potential, and $D$ is the diffusion coefficient. The convergence to a stationary probability density can be obtained by setting the left hand side to zero. 
The resulting equation will be satisfied if and only if all the terms of the sum are equal zero," + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", + "\n", + "In order to continue we have to make some assumptions about the conditional averages of the stochastic forces. \n", + "In view of the chaotic character of the stochastic forces the following \n", + "assumptions seem to be appropriate" ] }, { "cell_type": "markdown", - "id": "64df5289", + "id": "c7831982", "metadata": { "editable": true }, "source": [ "$$\n", - "\\frac{\\partial^2 P}{\\partial {\\mathbf{x_i}^2}} = P\\frac{\\partial}{\\partial {\\mathbf{x_i}}}\\mathbf{F_i} + \\mathbf{F_i}\\frac{\\partial}{\\partial {\\mathbf{x_i}}}P.\n", + "\\langle \\mathbf{F}(t)\\rangle=0,\n", "$$" ] }, { "cell_type": "markdown", - "id": "d30017bd", + "id": "c9e0db2c", "metadata": { "editable": true }, "source": [ - "## Importance sampling\n", - "The drift vector should be of the form $\\mathbf{F} = g(\\mathbf{x}) \\frac{\\partial P}{\\partial \\mathbf{x}}$. Then," + "and" ] }, { "cell_type": "markdown", - "id": "9fe2dd41", + "id": "be1c9346", "metadata": { "editable": true }, "source": [ "$$\n", - "\\frac{\\partial^2 P}{\\partial {\\mathbf{x_i}^2}} = P\\frac{\\partial g}{\\partial P}\\left( \\frac{\\partial P}{\\partial {\\mathbf{x}_i}} \\right)^2 + P g \\frac{\\partial ^2 P}{\\partial {\\mathbf{x}_i^2}} + g \\left( \\frac{\\partial P}{\\partial {\\mathbf{x}_i}} \\right)^2.\n", + "\\langle \\mathbf{F}(t)\\cdot \\mathbf{F}(t^{\\prime })\\rangle_{\\mathbf{v}_{0}}= C_{\\mathbf{v}_{0}}\\delta (t-t^{\\prime }).\n", "$$" ] }, { "cell_type": "markdown", - "id": "4ff0a2ff", + "id": "86097828", "metadata": { "editable": true }, "source": [ - "The condition of stationary density means that the left hand side equals zero. In other words, the terms containing first and second derivatives have to cancel each other. It is possible only if $g = \\frac{1}{P}$, which yields" + "We omit the subscript $\\mathbf{v}_{0}$, when the quantity of interest turns out to be independent of $\\mathbf{v}_{0}$. Using the last three equations we get" ] }, { "cell_type": "markdown", - "id": "01260429", + "id": "a0422cbd", "metadata": { "editable": true }, "source": [ "$$\n", - "\\mathbf{F} = 2\\frac{1}{\\Psi_T}\\nabla\\Psi_T,\n", + "\\langle \\mathbf{v}(t)\\cdot \\mathbf{v}(t)\\rangle_{\\mathbf{v}_{0}}=v_{0}^{2}e^{-2\\xi t}+\\frac{C_{\\mathbf{v}_{0}}}{2\\xi }(1-e^{-2\\xi t}).\n", "$$" ] }, { "cell_type": "markdown", - "id": "44ca8225", + "id": "2c40aafe", "metadata": { "editable": true }, "source": [ - "which is known as the so-called *quantum force*. This term is responsible for pushing the walker towards regions of configuration space where the trial wave function is large, increasing the efficiency of the simulation in contrast to the Metropolis algorithm where the walker has the same probability of moving in every direction." 
+ "For large t this should be equal to 3kT/m, from which it follows that" ] }, { "cell_type": "markdown", - "id": "071339e2", + "id": "4583d7b7", "metadata": { "editable": true }, "source": [ - "## Importance sampling\n", - "The Fokker-Planck equation yields a (the solution to the equation) transition probability given by the Green's function" + "$$\n", + "\\langle \\mathbf{F}(t)\\cdot \\mathbf{F}(t^{\\prime })\\rangle =6\\frac{kT}{m}\\xi \\delta (t-t^{\\prime }).\n", + "$$" ] }, { "cell_type": "markdown", - "id": "ffac4e59", + "id": "d2706393", "metadata": { "editable": true }, "source": [ - "$$\n", - "G(y,x,\\Delta t) = \\frac{1}{(4\\pi D\\Delta t)^{3N/2}} \\exp{\\left(-(y-x-D\\Delta t F(x))^2/4D\\Delta t\\right)}\n", - "$$" + "This result is called the fluctuation-dissipation theorem ." ] }, { "cell_type": "markdown", - "id": "bc0e382d", + "id": "319f884a", "metadata": { "editable": true }, "source": [ - "which in turn means that our brute force Metropolis algorithm" + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", + "\n", + "Integrating" ] }, { "cell_type": "markdown", - "id": "da271d7c", + "id": "d83b9759", "metadata": { "editable": true }, "source": [ "$$\n", - "A(y,x) = \\mathrm{min}(1,q(y,x))),\n", + "\\mathbf{v}(t)=\\mathbf{v}_{0}e^{-\\xi t}+\\int_{0}^{t}d\\tau e^{-\\xi (t-\\tau )}\\mathbf{F }(\\tau ),\n", "$$" ] }, { "cell_type": "markdown", - "id": "1de86a0a", + "id": "b81d603b", "metadata": { "editable": true }, "source": [ - "with $q(y,x) = |\\Psi_T(y)|^2/|\\Psi_T(x)|^2$ is now replaced by the [Metropolis-Hastings algorithm](http://scitation.aip.org/content/aip/journal/jcp/21/6/10.1063/1.1699114) as well as [Hasting's article](http://biomet.oxfordjournals.org/content/57/1/97.abstract)," + "we get" ] }, { "cell_type": "markdown", - "id": "8d934b9e", + "id": "2273016a", "metadata": { "editable": true }, "source": [ "$$\n", - "q(y,x) = \\frac{G(x,y,\\Delta t)|\\Psi_T(y)|^2}{G(y,x,\\Delta t)|\\Psi_T(x)|^2}\n", + "\\mathbf{r}(t)=\\mathbf{r}_{0}+\\mathbf{v}_{0}\\frac{1}{\\xi }(1-e^{-\\xi t})+\n", + "\\int_0^td\\tau \\int_0^{\\tau}\\tau ^{\\prime } e^{-\\xi (\\tau -\\tau ^{\\prime })}\\mathbf{F}(\\tau ^{\\prime }),\n", "$$" ] }, { "cell_type": "markdown", - "id": "afcadcad", - "metadata": { - "editable": true - }, - "source": [ - "## Code example for the interacting case with importance sampling\n", - "\n", - "We are now ready to implement importance sampling. This is done here for the two-electron case with the Coulomb interaction, as in the previous example. We have two variational parameters $\\alpha$ and $\\beta$. 
After the set up of files" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "e6d3d711", + "id": "a8e9cad2", "metadata": { - "collapsed": false, "editable": true }, - "outputs": [], "source": [ - "# Common imports\n", - "import os\n", - "\n", - "# Where to save the figures and data files\n", - "PROJECT_ROOT_DIR = \"Results\"\n", - "FIGURE_ID = \"Results/FigureFiles\"\n", - "DATA_ID = \"Results/VMCQdotImportance\"\n", - "\n", - "if not os.path.exists(PROJECT_ROOT_DIR):\n", - " os.mkdir(PROJECT_ROOT_DIR)\n", - "\n", - "if not os.path.exists(FIGURE_ID):\n", - " os.makedirs(FIGURE_ID)\n", - "\n", - "if not os.path.exists(DATA_ID):\n", - " os.makedirs(DATA_ID)\n", - "\n", - "def image_path(fig_id):\n", - " return os.path.join(FIGURE_ID, fig_id)\n", - "\n", - "def data_path(dat_id):\n", - " return os.path.join(DATA_ID, dat_id)\n", - "\n", - "def save_fig(fig_id):\n", - " plt.savefig(image_path(fig_id) + \".png\", format='png')\n", - "\n", - "outfile = open(data_path(\"VMCQdotImportance.dat\"),'w')" + "from which we calculate the mean square displacement" ] }, { "cell_type": "markdown", - "id": "b8babb41", + "id": "283ed124", "metadata": { "editable": true }, "source": [ - "we move on to the set up of the trial wave function, the analytical expression for the local energy and the analytical expression for the quantum force." + "$$\n", + "\\langle ( \\mathbf{r}(t)-\\mathbf{r}_{0})^{2}\\rangle _{\\mathbf{v}_{0}}=\\frac{v_0^2}{\\xi}(1-e^{-\\xi t})^{2}+\\frac{3kT}{m\\xi ^{2}}(2\\xi t-3+4e^{-\\xi t}-e^{-2\\xi t}).\n", + "$$" ] }, { - "cell_type": "code", - "execution_count": 2, - "id": "77f36ed9", + "cell_type": "markdown", + "id": "5be6af5b", "metadata": { - "collapsed": false, "editable": true }, - "outputs": [], "source": [ - "%matplotlib inline\n", - "\n", - "# 2-electron VMC code for 2dim quantum dot with importance sampling\n", - "# Using gaussian rng for new positions and Metropolis- Hastings \n", - "# No energy minimization\n", - "from math import exp, sqrt\n", - "from random import random, seed, normalvariate\n", - "import numpy as np\n", - "import matplotlib.pyplot as plt\n", - "from mpl_toolkits.mplot3d import Axes3D\n", - "from matplotlib import cm\n", - "from matplotlib.ticker import LinearLocator, FormatStrFormatter\n", - "import sys\n", - "from numba import jit,njit\n", - "\n", - "\n", - "# Trial wave function for the 2-electron quantum dot in two dims\n", - "def WaveFunction(r,alpha,beta):\n", - " r1 = r[0,0]**2 + r[0,1]**2\n", - " r2 = r[1,0]**2 + r[1,1]**2\n", - " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", - " deno = r12/(1+beta*r12)\n", - " return exp(-0.5*alpha*(r1+r2)+deno)\n", - "\n", - "# Local energy for the 2-electron quantum dot in two dims, using analytical local energy\n", - "def LocalEnergy(r,alpha,beta):\n", - " \n", - " r1 = (r[0,0]**2 + r[0,1]**2)\n", - " r2 = (r[1,0]**2 + r[1,1]**2)\n", - " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", - " deno = 1.0/(1+beta*r12)\n", - " deno2 = deno*deno\n", - " return 0.5*(1-alpha*alpha)*(r1 + r2) +2.0*alpha + 1.0/r12+deno2*(alpha*r12-deno2+2*beta*deno-1.0/r12)\n", - "\n", - "# Setting up the quantum force for the two-electron quantum dot, recall that it is a vector\n", - "def QuantumForce(r,alpha,beta):\n", + "## Importance sampling, Fokker-Planck and Langevin equations\n", + "**Langevin equation.**\n", "\n", - " qforce = np.zeros((NumberParticles,Dimension), np.double)\n", - " r12 = sqrt((r[0,0]-r[1,0])**2 + (r[0,1]-r[1,1])**2)\n", - " deno = 1.0/(1+beta*r12)\n", - " qforce[0,:] = 
-2*r[0,:]*alpha*(r[0,:]-r[1,:])*deno*deno/r12\n", - " qforce[1,:] = -2*r[1,:]*alpha*(r[1,:]-r[0,:])*deno*deno/r12\n", - " return qforce" + "For very large $t$ this becomes" ] }, { "cell_type": "markdown", - "id": "70d37772", + "id": "656105ea", "metadata": { "editable": true }, "source": [ - "The Monte Carlo sampling includes now the Metropolis-Hastings algorithm, with the additional complication of having to evaluate the **quantum force** and the Green's function which is the solution of the Fokker-Planck equation." + "$$\n", + "\\langle (\\mathbf{r}(t)-\\mathbf{r}_{0})^{2}\\rangle =\\frac{6kT}{m\\xi }t\n", + "$$" ] }, { - "cell_type": "code", - "execution_count": 3, - "id": "a1c3f9e1", + "cell_type": "markdown", + "id": "8151af7c", "metadata": { - "collapsed": false, "editable": true }, - "outputs": [], "source": [ - "# The Monte Carlo sampling with the Metropolis algo\n", - "# jit decorator tells Numba to compile this function.\n", - "# The argument types will be inferred by Numba when function is called.\n", - "@jit()\n", - "def MonteCarloSampling():\n", - "\n", - " NumberMCcycles= 100000\n", - " # Parameters in the Fokker-Planck simulation of the quantum force\n", - " D = 0.5\n", - " TimeStep = 0.05\n", - " # positions\n", - " PositionOld = np.zeros((NumberParticles,Dimension), np.double)\n", - " PositionNew = np.zeros((NumberParticles,Dimension), np.double)\n", - " # Quantum force\n", - " QuantumForceOld = np.zeros((NumberParticles,Dimension), np.double)\n", - " QuantumForceNew = np.zeros((NumberParticles,Dimension), np.double)\n", - "\n", - " # seed for rng generator \n", - " seed()\n", - " # start variational parameter loops, two parameters here\n", - " alpha = 0.9\n", - " for ia in range(MaxVariations):\n", - " alpha += .025\n", - " AlphaValues[ia] = alpha\n", - " beta = 0.2 \n", - " for jb in range(MaxVariations):\n", - " beta += .01\n", - " BetaValues[jb] = beta\n", - " energy = energy2 = 0.0\n", - " DeltaE = 0.0\n", - " #Initial position\n", - " for i in range(NumberParticles):\n", - " for j in range(Dimension):\n", - " PositionOld[i,j] = normalvariate(0.0,1.0)*sqrt(TimeStep)\n", - " wfold = WaveFunction(PositionOld,alpha,beta)\n", - " QuantumForceOld = QuantumForce(PositionOld,alpha, beta)\n", - "\n", - " #Loop over MC MCcycles\n", - " for MCcycle in range(NumberMCcycles):\n", - " #Trial position moving one particle at the time\n", - " for i in range(NumberParticles):\n", - " for j in range(Dimension):\n", - " PositionNew[i,j] = PositionOld[i,j]+normalvariate(0.0,1.0)*sqrt(TimeStep)+\\\n", - " QuantumForceOld[i,j]*TimeStep*D\n", - " wfnew = WaveFunction(PositionNew,alpha,beta)\n", - " QuantumForceNew = QuantumForce(PositionNew,alpha, beta)\n", - " GreensFunction = 0.0\n", - " for j in range(Dimension):\n", - " GreensFunction += 0.5*(QuantumForceOld[i,j]+QuantumForceNew[i,j])*\\\n", - "\t (D*TimeStep*0.5*(QuantumForceOld[i,j]-QuantumForceNew[i,j])-\\\n", - " PositionNew[i,j]+PositionOld[i,j])\n", - " \n", - " GreensFunction = exp(GreensFunction)\n", - " ProbabilityRatio = GreensFunction*wfnew**2/wfold**2\n", - " #Metropolis-Hastings test to see whether we accept the move\n", - " if random() <= ProbabilityRatio:\n", - " for j in range(Dimension):\n", - " PositionOld[i,j] = PositionNew[i,j]\n", - " QuantumForceOld[i,j] = QuantumForceNew[i,j]\n", - " wfold = wfnew\n", - " DeltaE = LocalEnergy(PositionOld,alpha,beta)\n", - " energy += DeltaE\n", - " energy2 += DeltaE**2\n", - " # We calculate mean, variance and error (no blocking applied)\n", - " energy /= NumberMCcycles\n", - " 
energy2 /= NumberMCcycles\n", - " variance = energy2 - energy**2\n", - " error = sqrt(variance/NumberMCcycles)\n", - " Energies[ia,jb] = energy \n", - " outfile.write('%f %f %f %f %f\\n' %(alpha,beta,energy,variance,error))\n", - " return Energies, AlphaValues, BetaValues" + "from which we get the Einstein relation" ] }, { "cell_type": "markdown", - "id": "8e773c1b", + "id": "ec999f4c", "metadata": { "editable": true }, "source": [ - "The main part here contains the setup of the variational parameters, the energies and the variance." + "$$\n", + "D= \\frac{kT}{m\\xi }\n", + "$$" ] }, { - "cell_type": "code", - "execution_count": 4, - "id": "136120ac", + "cell_type": "markdown", + "id": "629910ed", "metadata": { - "collapsed": false, "editable": true }, - "outputs": [], "source": [ - "#Here starts the main program with variable declarations\n", - "NumberParticles = 2\n", - "Dimension = 2\n", - "MaxVariations = 10\n", - "Energies = np.zeros((MaxVariations,MaxVariations))\n", - "AlphaValues = np.zeros(MaxVariations)\n", - "BetaValues = np.zeros(MaxVariations)\n", - "(Energies, AlphaValues, BetaValues) = MonteCarloSampling()\n", - "outfile.close()\n", - "# Prepare for plots\n", - "fig = plt.figure()\n", - "ax = fig.gca(projection='3d')\n", - "# Plot the surface.\n", - "X, Y = np.meshgrid(AlphaValues, BetaValues)\n", - "surf = ax.plot_surface(X, Y, Energies,cmap=cm.coolwarm,linewidth=0, antialiased=False)\n", - "# Customize the z axis.\n", - "zmin = np.matrix(Energies).min()\n", - "zmax = np.matrix(Energies).max()\n", - "ax.set_zlim(zmin, zmax)\n", - "ax.set_xlabel(r'$\\alpha$')\n", - "ax.set_ylabel(r'$\\beta$')\n", - "ax.set_zlabel(r'$\\langle E \\rangle$')\n", - "ax.zaxis.set_major_locator(LinearLocator(10))\n", - "ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\n", - "# Add a color bar which maps values to colors.\n", - "fig.colorbar(surf, shrink=0.5, aspect=5)\n", - "save_fig(\"QdotImportance\")\n", - "plt.show()" + "where we have used $\\langle (\\mathbf{r}(t)-\\mathbf{r}_{0})^{2}\\rangle =6Dt$." 
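
As a closing numerical sanity check (our addition, not a cell from the original notebook), free three-dimensional Brownian motion with diffusion constant $D$ should reproduce the relation $\langle (\mathbf{r}(t)-\mathbf{r}_0)^2\rangle = 6Dt$ used above:

```python
import numpy as np

rng = np.random.default_rng(123)
D, dt, nsteps, nwalkers = 0.5, 0.01, 500, 2000
t = nsteps * dt

# each Brownian step is Gaussian with variance 2 D dt per component
steps = np.sqrt(2.0 * D * dt) * rng.standard_normal((nwalkers, nsteps, 3))
displacement = steps.sum(axis=1)                 # r(t) - r0 for every walker
msd = np.mean(np.sum(displacement**2, axis=1))
print(msd, "vs", 6.0 * D * t)                    # <(r - r0)^2> = 6 D t
```
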
]
  }
 ],
diff --git a/doc/pub/week3/pdf/week3-beamer.pdf b/doc/pub/week3/pdf/week3-beamer.pdf
index dd89903f..c4138cd4 100644
Binary files a/doc/pub/week3/pdf/week3-beamer.pdf and b/doc/pub/week3/pdf/week3-beamer.pdf differ
diff --git a/doc/pub/week3/pdf/week3.pdf b/doc/pub/week3/pdf/week3.pdf
index cd75ccf9..fde66f21 100644
Binary files a/doc/pub/week3/pdf/week3.pdf and b/doc/pub/week3/pdf/week3.pdf differ
diff --git a/doc/src/week3/week3.do.txt b/doc/src/week3/week3.do.txt
index cddb5f23..35c1aa53 100644
--- a/doc/src/week3/week3.do.txt
+++ b/doc/src/week3/week3.do.txt
@@ -1,12 +1,12 @@
 TITLE: Week 5 January 29-February 2: Metropolis Algorithm and Markov Chains, Importance Sampling, Fokker-Planck and Langevin equations
 AUTHOR: Morten Hjorth-Jensen {copyright, 1999-present|CC BY-NC} Email morten.hjorth-jensen@fys.uio.no at Department of Physics and Center for Computing in Science Education, University of Oslo, Oslo, Norway & Department of Physics and Astronomy and Facility for Rare Isotope Beams, Michigan State University, East Lansing, Michigan, USA
-DATE: February 6-10
+DATE: February 2
 
 !split
-===== Overview of week 5 =====
+===== Overview of week 5, January 29-February 2 =====
 !bblock Topics
-* Markov Chain Monte Carlo
+* Markov Chain Monte Carlo and repetition from last week
 * Metropolis-Hastings sampling and Importance Sampling
 !eblock
 
@@ -14,299 +14,9 @@ DATE: February 6-10
 * Overview video on "Metropolis algorithm":"https://www.youtube.com/watch?v=h1NOS_wxgGg&ab_channel=JeffPicton"
 * "Video of lecture tba":"https://youtu.be/"
 * "Handwritten notes tba":"https://github.com/CompPhysics/ComputationalPhysics2/blob/gh-pages/doc/HandWrittenNotes/2023/NotesFebruary2.pdf"
-* See also "Lectures from FYS3150/4150 on the Metropolis Algorithm":"http://compphysics.github.io/ComputationalPhysics/doc/pub/rw/html/rw-bs.html"
 !eblock
 
-!split
-===== Basics of the Metropolis Algorithm =====
-
-
-The Metropolis et al.
-algorithm was invented by Metropolis et. a
-and is often simply called the Metropolis algorithm.
-It is a method to sample a normalized probability
-distribution by a stochastic process. We define ${\cal P}_i^{(n)}$ to
-be the probability for finding the system in the state $i$ at step $n$.
-The algorithm is then
-
-!split
-===== The basic of the Metropolis Algorithm =====
-
-* Sample a possible new state $j$ with some probability $T_{i\rightarrow j}$.
-* Accept the new state $j$ with probability $A_{i \rightarrow j}$ and use it as the next sample.
-* With probability $1-A_{i\rightarrow j}$ the move is rejected and the original state $i$ is used again as a sample.
-
-
-We wish to derive the required properties of $T$ and $A$ such that
-${\cal P}_i^{(n\rightarrow \infty)} \rightarrow p_i$ so that starting
-from any distribution, the method converges to the correct distribution.
-Note that the description here is for a discrete probability distribution.
-Replacing probabilities $p_i$ with expressions like $p(x_i)dx_i$ will
-take all of these over to the corresponding continuum expressions.
-
-!split
-===== More on the Metropolis =====
-
-The dynamical equation for ${\cal P}_i^{(n)}$ can be written directly from
-the description above. 
The probability of being in the state $i$ at step $n$ -is given by the probability of being in any state $j$ at the previous step, -and making an accepted transition to $i$ added to the probability of -being in the state $i$, making a transition to any state $j$ and -rejecting the move: -!bt -\begin{equation} -\label{eq:eq1} -{\cal P}^{(n)}_i = \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} -+{\cal P}^{(n-1)}_iT_{i\rightarrow j}\left ( 1- A_{i\rightarrow j} \right) -\right ] \,. -\end{equation} -!et - -!split -===== Metropolis algorithm, setting it up ===== -Since the probability of making some transition must be 1, -$\sum_j T_{i\rightarrow j} = 1$, and Eq. (ref{eq:eq1}) becomes - -!bt -\begin{equation} -{\cal P}^{(n)}_i = {\cal P}^{(n-1)}_i + - \sum_j \left [ -{\cal P}^{(n-1)}_jT_{j\rightarrow i} A_{j\rightarrow i} --{\cal P}^{(n-1)}_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] \,. -\end{equation} -!et - -!split -===== Metropolis continues ===== - -For large $n$ we require that ${\cal P}^{(n\rightarrow \infty)}_i = p_i$, -the desired probability distribution. Taking this limit, gives the -balance requirement - -!bt -\begin{equation} -\sum_j \left [p_jT_{j\rightarrow i} A_{j\rightarrow i}-p_iT_{i\rightarrow j}A_{i\rightarrow j} -\right ] = 0, -\end{equation} -!et - - -!split -===== Detailed Balance ===== - -The balance requirement is very weak. Typically the much stronger detailed -balance requirement is enforced, that is rather than the sum being -set to zero, we set each term separately to zero and use this -to determine the acceptance probabilities. Rearranging, the result is - -!bt -\begin{equation} -\frac{ A_{j\rightarrow i}}{A_{i\rightarrow j}} -= \frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}} \,. -\end{equation} -!et - -!split -===== More on Detailed Balance ===== - -The Metropolis choice is to maximize the $A$ values, that is - -!bt -\begin{equation} -A_{j \rightarrow i} = \min \left ( 1, -\frac{p_iT_{i\rightarrow j}}{ p_jT_{j\rightarrow i}}\right ). -\end{equation} -!et - -Other choices are possible, but they all correspond to multilplying -$A_{i\rightarrow j}$ and $A_{j\rightarrow i}$ by the same constant -smaller than unity. The penalty function method uses just such -a factor to compensate for $p_i$ that are evaluated stochastically -and are therefore noisy. - -Having chosen the acceptance probabilities, we have guaranteed that -if the ${\cal P}_i^{(n)}$ has equilibrated, that is if it is equal to $p_i$, -it will remain equilibrated. Next we need to find the circumstances for -convergence to equilibrium. - -!split -===== Dynamical Equation ===== - -The dynamical equation can be written as - -!bt -\begin{equation} -{\cal P}^{(n)}_i = \sum_j M_{ij}{\cal P}^{(n-1)}_j -\end{equation} -!et -with the matrix $M$ given by - -!bt -\begin{equation} -M_{ij} = \delta_{ij}\left [ 1 -\sum_k T_{i\rightarrow k} A_{i \rightarrow k} -\right ] + T_{j\rightarrow i} A_{j\rightarrow i} \,. -\end{equation} -!et - -Summing over $i$ shows that $\sum_i M_{ij} = 1$, and since -$\sum_k T_{i\rightarrow k} = 1$, and $A_{i \rightarrow k} \leq 1$, the -elements of the matrix satisfy $M_{ij} \geq 0$. The matrix $M$ is therefore -a stochastic matrix. - - -!split -===== Interpreting the Metropolis Algorithm ===== - -The Metropolis method is simply the power method for computing the -right eigenvector of $M$ with the largest magnitude eigenvalue. -By construction, the correct probability distribution is a right eigenvector -with eigenvalue 1. 
Therefore, for the Metropolis method to converge -to this result, we must show that $M$ has only one eigenvalue with this -magnitude, and all other eigenvalues are smaller. - -Even a defective matrix has at least one left and right eigenvector for -each eigenvalue. An example of a defective matrix is - - -!bt -\[ -\begin{bmatrix} -0 & 1\\ -0 & 0 \\ -\end{bmatrix}, -\] -!et -with two zero eigenvalues, only one right eigenvector - -!bt -\[ -\begin{bmatrix} -1 \\ -0\\ -\end{bmatrix} -\] -!et -and only one left eigenvector $(0\ 1)$. - -!split -===== Gershgorin bounds and Metropolis ===== - -The Gershgorin bounds for the eigenvalues can be derived by multiplying on -the left with the eigenvector with the maximum and minimum eigenvalues, - -!bt -\begin{align} -\sum_i \psi^{\rm max}_i M_{ij} =& \lambda_{\rm max} \psi^{\rm max}_j -\nonumber\\ -\sum_i \psi^{\rm min}_i M_{ij} =& \lambda_{\rm min} \psi^{\rm min}_j -\end{align} -!et - -!split -===== Normalizing the Eigenvectors ===== - -Next we choose the normalization of these eigenvectors so that the -largest element (or one of the equally largest elements) -has value 1. Let's call this element $k$, and -we can therefore bound the magnitude of the other elements to be less -than or equal to 1. -This leads to the inequalities, using the property that $M_{ij}\geq 0$, - -!bt -\begin{eqnarray} -\sum_i M_{ik} \leq \lambda_{\rm max} -\nonumber\\ -M_{kk}-\sum_{i \neq k} M_{ik} \geq \lambda_{\rm min} -\end{eqnarray} -!et -where the equality from the maximum -will occur only if the eigenvector takes the value 1 for all values of -$i$ where $M_{ik} \neq 0$, and the equality for the minimum will -occur only if the eigenvector takes the value -1 for all values of $i\neq k$ -where $M_{ik} \neq 0$. - - -!split -===== More Metropolis analysis ===== - -That the maximum eigenvalue is 1 follows immediately from the property -that $\sum_i M_{ik} = 1$. Similarly the minimum eigenvalue can be -1, -but only if $M_{kk} = 0$ and the magnitude of all the other elements -$\psi_i^{\rm min}$ of -the eigenvector that multiply nonzero elements $M_{ik}$ are -1. - -Let's first see what the properties of $M$ must be -to eliminate any -1 eigenvalues. -To have a -1 eigenvalue, the left eigenvector must contain only $\pm 1$ -and $0$ values. Taking in turn each $\pm 1$ value as the maximum, so that -it corresponds to the index $k$, the nonzero $M_{ik}$ values must -correspond to $i$ index values of the eigenvector which have opposite -sign elements. That is, the $M$ matrix must break up into sets of -states that always make transitions from set A to set B ... back to set A. -In particular, there can be no rejections of these moves in the cycle -since the -1 eigenvalue requires $M_{kk}=0$. To guarantee no eigenvalues -with eigenvalue -1, we simply have to make sure that there are no -cycles among states. Notice that this is generally trivial since such -cycles cannot have any rejections at any stage. An example of such -a cycle is sampling a noninteracting Ising spin. If the transition is -taken to flip the spin, and the energy difference is zero, the Boltzmann -factor will not change and the move will always be accepted. The system -will simply flip from up to down to up to down ad infinitum. Including -a rejection probability or using a heat bath algorithm -immediately fixes the problem. - -!split -===== Final Considerations I ===== - -Next we need to make sure that there is only one left eigenvector -with eigenvalue 1. 
To get an eigenvalue 1, the left eigenvector must be
-constructed from only ones and zeroes. It is straightforward to
-see that a vector made up of
-ones and zeroes can only be an eigenvector with eigenvalue 1 if the
-matrix element $M_{ij} = 0$ for all cases where $\psi_i \neq \psi_j$.
-That is we can choose an index $i$ and take $\psi_i = 1$.
-We require all elements $\psi_j$ where $M_{ij} \neq 0$ to also have
-the value $1$. Continuing we then require all elements $\psi_\ell$ $M_{j\ell}$
-to have value $1$. Only if the matrix $M$ can be put into block diagonal
-form can there be more than one choice for the left eigenvector with
-eigenvalue 1. We therefore require that the transition matrix not
-be in block diagonal form. This simply means that we must choose
-the transition probability so that we can get from any allowed state
-to any other in a series of transitions.
-
-!split
-===== Final Considerations II =====
-
-
-Finally, we note that for a defective matrix, with more eigenvalues
-than independent eigenvectors for eigenvalue 1,
-the left and right
-eigenvectors of eigenvalue 1 would be orthogonal.
-Here the left eigenvector is all 1
-except for states that can never be reached, and the right eigenvector
-is $p_i > 0$ except for states that give zero probability. We already
-require that we can reach
-all states that contribute to $p_i$. Therefore the left and right
-eigenvectors with eigenvalue 1 do not correspond to a defective sector
-of the matrix and they are unique. The Metropolis algorithm therefore
-converges exponentially to the desired distribution.
-
-
-!split
-===== Final Considerations III =====
-
-The requirements for the transition $T_{i \rightarrow j}$ are
-* A series of transitions must let us to get from any allowed state to any other by a finite series of transitions.
-* The transitions cannot be grouped into sets of states, A, B, C ,... such that transitions from $A$ go to $B$, $B$ to $C$ etc and finally back to $A$. With condition (a) satisfied, this condition will always be satisfied if either $T_{i \rightarrow i} \neq 0$ or there are some rejected moves.
-
-
-
-
-
-
 !split
 ===== Importance Sampling: Overview of what needs to be coded =====
 
@@ -457,7 +167,6 @@ from mpl_toolkits.mplot3d import Axes3D
 from matplotlib import cm
 from matplotlib.ticker import LinearLocator, FormatStrFormatter
 import sys
-from numba import jit,njit
 
 
 # Trial wave function for the 2-electron quantum dot in two dims
@@ -493,9 +202,6 @@ The Monte Carlo sampling includes now the Metropolis-Hastings algorithm, with th
 !bc pycod
 # The Monte Carlo sampling with the Metropolis algo
-# jit decorator tells Numba to compile this function.
-# The argument types will be inferred by Numba when function is called.
-@jit()
 def MonteCarloSampling():
 
     NumberMCcycles= 100000
@@ -601,5 +307,884 @@ plt.show()
 
 
 
+!split
+===== Importance sampling, program elements =====
+!bblock
+The general derivative formula of the Jastrow factor is (the subscript $C$ stands for Correlation)
+!bt
+\[
+\frac{1}{\Psi_C}\frac{\partial \Psi_C}{\partial x_k} =
+\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k}
++
+\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_k}
+\]
+!et
+However, with our correlation function written in a way
+which can be reused later, namely
+!bt
+\[
+\Psi_C=\prod_{i< j}g(r_{ij})= \exp{\left\{\sum_{i<j}f(r_{ij})\right\}},
+\]
+!et
+the derivatives can be expressed directly in terms of the pair function $f(r_{ij})$.
+!eblock
+
+!split
+===== Importance sampling, Fokker-Planck and Langevin equations =====
+!bblock
+In the expansion of the transition probability one encounters moments $M_n$, where $M_1$ plays the role of a drift term and $M_2$ that of a diffusion coefficient.
+When $\tau$ is much larger than the standard correlation time of
+the system then $M_n$ for $n > 2$ can normally be neglected.
+This means that fluctuations become negligible at large time scales. 
!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock
When $\tau$ is much larger than the standard correlation time of the
system, the moments $M_n$ of the transition probability with $n > 2$ can
normally be neglected. This means that fluctuations become negligible at
large time scales.

If we neglect such terms we can rewrite the ESKC
(Einstein-Smoluchowski-Kolmogorov-Chapman) equation for the transition
probability density $W$ as
!bt
\[
\frac{\partial W(\mathbf{x},s|\mathbf{x}_0)}{\partial s}=
-\frac{\partial M_1W(\mathbf{x},s|\mathbf{x}_0)}{\partial x}+
\frac{1}{2}\frac{\partial^2 M_2W(\mathbf{x},s|\mathbf{x}_0)}{\partial x^2}.
\]
!et
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock
In a more compact form we have
!bt
\[
\frac{\partial W}{\partial s}=
-\frac{\partial M_1W}{\partial x}+
\frac{1}{2}\frac{\partial^2 M_2W}{\partial x^2},
\]
!et
which is the Fokker-Planck equation! It is trivial to replace
position with velocity (momentum).
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
Consider a particle suspended in a liquid. On its path through the liquid
it will continuously collide with the liquid molecules. Because on average
the particle will collide more often on the front side than on the back
side, it will experience a systematic force proportional to its velocity,
and directed opposite to its velocity. Besides this systematic force the
particle will experience a stochastic force $\mathbf{F}(t)$.
The equations of motion are
* $\frac{d\mathbf{r}}{dt}=\mathbf{v}$ and
* $\frac{d\mathbf{v}}{dt}=-\xi \mathbf{v}+\mathbf{F}$.
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
From hydrodynamics we know that the friction constant $\xi$ is given by
!bt
\[
\xi =6\pi \eta a/m,
\]
!et
where $\eta$ is the viscosity of the solvent, $a$ is the radius of the
particle and $m$ its mass.

Solving the second equation in the previous slide we get
!bt
\[
\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F}(\tau ).
\]
!et
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
If we want to get some useful information out of this, we have to average
over all possible realizations of $\mathbf{F}(t)$, with the initial
velocity as a condition. A useful quantity, for example, is
!bt
\begin{align*}
\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}} =& v_{0}^{2}e^{-2\xi t}
+2\int_{0}^{t}d\tau e^{-\xi (2t-\tau)}\mathbf{v}_{0}\cdot \langle \mathbf{F}(\tau )\rangle_{\mathbf{v}_{0}} \\
&+\int_{0}^{t}d\tau ^{\prime }\int_{0}^{t}d\tau e^{-\xi (2t-\tau -\tau ^{\prime })}
\langle \mathbf{F}(\tau )\cdot \mathbf{F}(\tau ^{\prime })\rangle_{ \mathbf{v}_{0}}.
\end{align*}
!et
!eblock
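As a numerical aside (not part of the original notes), one can integrate the
Langevin equation above with the Euler-Maruyama scheme and check that
$\langle \mathbf{v}\cdot\mathbf{v}\rangle$ relaxes towards $3kT/m$. The noise
strength $\sqrt{2\xi kT/m}$ used below anticipates the fluctuation-dissipation
result derived on the next slides, and all parameter values are arbitrary
choices.

!bc pycod
import numpy as np

# Euler-Maruyama: dv = -xi*v*dt + sqrt(2*xi*kT/m)*dW for each component,
# checking that <v.v> relaxes towards 3kT/m (arbitrary units and parameters)
rng = np.random.default_rng(42)
xi, kT_over_m = 1.0, 1.0
dt, nsteps, nwalkers = 0.01, 2000, 2000
v = np.zeros((nwalkers, 3))        # all walkers start from rest
for step in range(nsteps):
    v += -xi*v*dt + np.sqrt(2.0*xi*kT_over_m*dt)*rng.standard_normal(v.shape)
print("<v.v> =", np.mean(np.sum(v*v, axis=1)), " expected 3kT/m =", 3*kT_over_m)
!ec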
!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
In order to continue we have to make some assumptions about the conditional
averages of the stochastic forces. In view of the chaotic character of the
stochastic forces, the following assumptions seem to be appropriate:
!bt
\[
\langle \mathbf{F}(t)\rangle=0,
\]
!et
and
!bt
\[
\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle_{\mathbf{v}_{0}}= C_{\mathbf{v}_{0}}\delta (t-t^{\prime }).
\]
!et

We omit the subscript $\mathbf{v}_{0}$ when the quantity of interest turns
out to be independent of $\mathbf{v}_{0}$. Using the last three equations we get
!bt
\[
\langle \mathbf{v}(t)\cdot \mathbf{v}(t)\rangle_{\mathbf{v}_{0}}=v_{0}^{2}e^{-2\xi t}+\frac{C_{\mathbf{v}_{0}}}{2\xi }(1-e^{-2\xi t}).
\]
!et
For large $t$ this should be equal to $3kT/m$, from which it follows that
!bt
\[
\langle \mathbf{F}(t)\cdot \mathbf{F}(t^{\prime })\rangle =6\frac{kT}{m}\xi \delta (t-t^{\prime }).
\]
!et
This result is called the fluctuation-dissipation theorem.
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
Integrating
!bt
\[
\mathbf{v}(t)=\mathbf{v}_{0}e^{-\xi t}+\int_{0}^{t}d\tau e^{-\xi (t-\tau )}\mathbf{F}(\tau ),
\]
!et
we get
!bt
\[
\mathbf{r}(t)=\mathbf{r}_{0}+\mathbf{v}_{0}\frac{1}{\xi }(1-e^{-\xi t})+
\int_0^td\tau \int_0^{\tau}d\tau ^{\prime } e^{-\xi (\tau -\tau ^{\prime })}\mathbf{F}(\tau ^{\prime }),
\]
!et
from which we calculate the mean square displacement
!bt
\[
\langle ( \mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle _{\mathbf{v}_{0}}=\frac{v_0^2}{\xi^2}(1-e^{-\xi t})^{2}+\frac{3kT}{m\xi ^{2}}(2\xi t-3+4e^{-\xi t}-e^{-2\xi t}).
\]
!et
!eblock


!split
===== Importance sampling, Fokker-Planck and Langevin equations =====
!bblock Langevin equation
For very large $t$ this becomes
!bt
\[
\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =\frac{6kT}{m\xi }t,
\]
!et
from which we get the Einstein relation
!bt
\[
D= \frac{kT}{m\xi },
\]
!et
where we have used $\langle (\mathbf{r}(t)-\mathbf{r}_{0})^{2}\rangle =6Dt$.
!eblock
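A short sketch (again not from the notes; the parameter values are arbitrary)
that checks the Einstein relation numerically: in the overdamped limit each
coordinate component moves by a Gaussian step of variance $2D\Delta t$ per
time step, so the accumulated mean square displacement in three dimensions
should approach $6Dt$.

!bc pycod
import numpy as np

# Overdamped Brownian dynamics: each component moves by sqrt(2*D*dt)*N(0,1),
# so the mean square displacement in 3D should approach 6*D*t
rng = np.random.default_rng(7)
D, dt, nsteps, nwalkers = 0.5, 0.001, 1000, 5000
r = np.zeros((nwalkers, 3))        # all walkers start at the origin
for step in range(nsteps):
    r += np.sqrt(2.0*D*dt)*rng.standard_normal(r.shape)
t = nsteps*dt
print("<(r-r0)^2> =", np.mean(np.sum(r*r, axis=1)), " 6Dt =", 6*D*t)
!ec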