\chapter{Calculational tools}
\label{ToolsAppendix}
\graphicspath{{Figures/ToolsAppendix/}{Figures/Common/}}
A significant part of any theoretical physics project is spent on the technical task of using computational tools. It is therefore unsurprising that, just as experimental PhD candidates spend a large portion of their time designing and building the apparatus for their experiments, a theorist will spend time developing the codes needed in the pursuit of their work. In this appendix I describe the tool that I have developed in the course of my PhD to simplify the process of developing these codes. %I would like to think that the work described here might find wider application beyond the term of my PhD.
\parasep
A wide variety of simulations have been necessary in pursuit of the goals of this thesis. These have included simulations for the calculation of condensate ground states, the propagation of both Gross-Pitaevskii and Truncated Wigner atom laser models of different atomic species in systems of different dimensionality, the evolution of matrices describing quasiparticles, the evolution of a variety of optical pumping models, and the evolution of an evaporation-driven pumped atom laser. With the exception of the last of these simulations, which was written by \emph{Matthew Davis}, all of these simulations were written by me. The work necessary to hand-write all of these codes from scratch would have been prohibitive: each would have needed independent testing, with the associated time spent fixing errors. Instead, codes for almost all of the simulations in this thesis were produced by the computational package \XMDS\ \citep{Collecutt:2001}, or its successor, \xpdeint. Each of these tools takes a high-level description of a problem (an input script) and produces a fast low-level simulation (written in \texttt{C++}) that solves the problem. Because these packages have been tested extensively, the common aspects of simulations can be generated reliably.
A large number of problems in the fields of quantum and atom optics are very similar on a mathematical level: they fall into the class of systems of initial-value (stochastic) partial differential equations. Because of this similarity, it is feasible to create packages such as \XMDS\ and \xpdeint\ that focus purely on solving this type of problem. These problems can also be solved with more general-purpose tools such as \texttt{MATLAB} and \texttt{Mathematica}, which are ideal when the problem is sufficiently small that it may be solved with their built-in integrators. However, when the problem becomes large enough that the overheads of these tools become significant, a hand-coded solution has traditionally been necessary. \XMDS\ and \xpdeint\ provide an alternative to writing such codes by hand. Both tools are free and open source, and available to anyone\footnote{See \url{www.xmds.org} for further information. Note that this website has not yet been updated (at the time of printing) to mention \xpdeint. Further information about \xpdeint\ may be obtained from \url{github.com/grahamdennis/xpdeint}.}.
\XMDS\ and \xpdeint\ particularly excel at providing a smooth transition from a low-dimensional simulation to a higher-dimensional one, from a deterministic simulation to a stochastic one, or from a single-processor simulation to a distributed simulation running in parallel across multiple computers (or on a supercomputer). In hand-written codes, unless they were initially written with such a future extension in mind, each of these changes would require significant effort in rewriting the code. In \XMDS\ and \xpdeint, such changes require only minimal modifications to the input script. This encourages users to create test simulations of a simpler system (e.g.\ of reduced dimensionality), which run faster and therefore allow problems in the input script to be found and fixed more quickly; later, the simulation can be scaled up to the full problem. Fundamentally, the ease with which codes can be generated encourages \emph{experimentation} with different types of simulations, as the time taken to create the code is no longer the rate-limiting factor.
\parasep
\XMDS\ was originally created in 2001 by \citet{Collecutt:2001}, and has been improved over the intervening years by a large number of contributors (including myself) adding a variety of integrators, interaction picture algorithms \citep{Caradoc-Davies:2000qy}, cross-propagation algorithms, and the ability to solve deterministic problems in parallel. Over this period, \XMDS\ has been employed for a wide range of problems, including quantum optical information storage in two-level atoms \citep{Hetet:2008}, the influence of mobility on biodiversity \citep{Reichenbach:2007}, polarisation squeezing in optical fibres \citep{Corney:2006}, and quantum superchemistry in molecular Bose-Einstein condensates \citep{Hope:2001a}.
Since the development of \XMDS, there have been other problems to which it would have been desirable to apply it, but as it was not designed with such systems in mind, either less-than-satisfactory workarounds were used or a separate code was written by hand. A good example is simulations involving cylindrical or spherical symmetry. As \XMDS\ calculates spatial derivatives using Fourier transforms, the solution is necessarily assumed to be periodic over the computational domain. This is a complication when cylindrical or spherical symmetry is desired, as the solution near the origin is not necessarily similar in value to the solution at the outer edge of the computational domain (and would thus conflict with the assumed periodicity). This problem has previously been resolved by including an unphysical negative range for the radius (this technique was used, for example, in \citep{Wuster:2005,Dall:2009}). The workaround is unsatisfying for several reasons: it increases the computational resources required to solve the problem by at least a factor of two, it reduces the order of convergence of the solution (when considered as a function of the number of spatial grid points used), and care must be taken in the choice of the grid to exclude the origin as a point. This last step is necessary because the Laplacian operator in cylindrical and spherical coordinates contains terms involving inverse powers of the radius; were the origin included as a point in the computational grid, it would be necessary to divide by zero.
The immediate motivation for the creation of a successor to \XMDS\ was the need for an algorithm to solve the optical pumping model \eqref{OpticalPumping:GeneralMultimodeModel}, specifically an algorithm that could solve partial differential equations in time coupled with auxiliary equations which propagate in space in \emph{opposite} directions. A specific example of such a system is the $2 \hbar k$ momentum transfer `simple atom laser model' of \sectionref{OpticalPumping:SimpleModels:AtomLaserModel}. \XMDS\ implements an algorithm that can solve such auxiliary equations when they propagate in a single direction. This `cross-propagation' algorithm needed to be extended to implement the Alternating-Direction Implicit algorithm \citep{NumericalRecipes}, which can handle the more general case. While this algorithm could have been implemented in \XMDS, it would have required significant effort, for although \XMDS\ was a powerful tool, its internal workings had become something of a tangled mess. \XMDS\ worked more as a result of the sheer effort of those who modified it than because it was designed in such a way that extensions require a commensurate amount of effort. Nevertheless, the effort required to add features to \XMDS\ has, without a doubt, saved more effort overall for \XMDS's users.
I felt that by designing and implementing a successor to \XMDS, I could not only address my specific problem and the other limitations discussed above, but also produce a tool that would be easier to extend in the future. This was no minor task, as \XMDS\ contained approximately 38,000 lines of code. While none of \XMDS's code could be directly copied, the parts implementing the various integrators could be translated fairly simply, and some of the ideas in \XMDS\ could be directly implemented in \xpdeint, making the rewrite a slightly less daunting task. The current version of \xpdeint\ (at the time of printing) contains approximately 22,000 lines of code, and implements almost all of the features of \XMDS, while also improving upon it in several key areas. The features of \xpdeint\ are the subject of the remainder of this appendix.
\section{\xpdeint}
As discussed in the previous section, the class of problem that can be solved with \xpdeint\ (and \XMDS) is that of systems of initial-value (stochastic) partial differential equations. To solve such problems numerically, they must be restricted to a finite domain with boundary conditions imposed at the edges. \xpdeint\ discretises this problem by applying the pseudo-spectral method \citep{SpectralMethods}: the solution is decomposed as a weighted sum of a finite set of basis functions,
\begin{align}
f(x) &= \sum_{n=1}^{N} c_n g_n(x),
\end{align}
where $f(x)$ is the spatial representation of the solution, $g_n(x)$ are the basis functions, and $\{c_n\}$ are constants which form an equivalent representation of the solution (the \emph{spectral} representation \citep{SpectralMethods}). The choice of basis functions is significant as it determines the boundary conditions at the edge of the computational domain, and some sets of basis functions are more appropriate for some problems than others.
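As a concrete illustration, the following minimal Python sketch shows a function in both its spatial and spectral representations for the simplest case of the Fourier basis, where the transform between the two representations is the discrete Fourier transform. The sketch is purely illustrative (it is not \xpdeint\ code), and the grid, the Gaussian test function and the use of \texttt{numpy} are assumptions made only for this example.
\begin{verbatim}
import numpy as np

# A minimal sketch (not xpdeint code): a function on a periodic domain
# in both its spatial and spectral (Fourier) representations.
N = 64                          # number of grid points / basis functions
L = 10.0                        # length of the periodic domain
x = np.arange(N) * L / N        # spatial grid
f = np.exp(-(x - L / 2)**2)     # example solution f(x): a Gaussian

# Spectral representation: the coefficients c_n of the basis functions
# g_n(x) = exp(i k_n x) / N, obtained with the discrete Fourier transform.
c = np.fft.fft(f)

# Transforming back to the spatial representation recovers f(x).
f_again = np.fft.ifft(c).real
assert np.allclose(f, f_again)
\end{verbatim}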
The most common problem to which \xpdeint\ is applied is the solution of Schrödinger-like equations. For such problems, the potential and $s$-wave scattering terms are local in the spatial representation, and are therefore accurately calculated with any set of basis functions. The kinetic energy term, being proportional to the Laplacian operator, does not share this property, as knowledge of the solution in the neighbourhood of a point is necessary to evaluate it. By choosing the basis functions to be eigenfunctions of the Laplacian operator, the kinetic energy may be evaluated more accurately in the spectral representation. For example,
\begin{align}
\frac{- \hbar^2 \nabla^2}{2 M} f(x) &= \frac{-\hbar^2}{2 M}\sum_{n=1}^{N} \lambda_n c_n g_n(x),
\end{align}
where $\lambda_n$ is the eigenvalue for the Laplacian eigenfunction $g_n(x)$, i.e.\ $\nabla^2 g_n(x) = \lambda_n g_n(x)$. \xpdeint\ implements a range of different types of basis functions for use in problems exhibiting different types of symmetries: the Fourier basis for translation-invariant problems, the sine and cosine bases for problems with reflection symmetry, the Bessel and spherical-Bessel bases for problems with cylindrical and spherical symmetry, and the Hermite-Gauss basis where the basis functions are the harmonic oscillator eigenstates. This last basis is useful in (stochastic) projected-GP models in which it permits an implementation of a self-consistent energy cut-off \citep{Blakie:2008a}. Of these different types of bases, only the first, the Fourier basis, was available in \XMDS. The advantage of the Fourier basis is that it permits the use of the Fast Fourier Transform (FFT), a `fast' algorithm ($O(N \log N)$ operations for an $N$-point 1D Fourier transform) for transforming between the spatial and spectral representations. Similar algorithms may also be used for the sine and cosine bases, but all of the other bases require the use of a matrix multiplication to transform between the spatial and spectral representations. In contrast, matrix multiplication requires $O(N^2)$ operations for an $N$-point 1D transform. In practice, this cost can be reduced by a factor of two when every basis function has a definite parity (i.e.\ even or odd with respect to the origin, an example is the Hermite-Gauss basis), and such an algorithm is implemented in \xpdeint. Although the matrix multiplication transform is slower than the FFT, for sufficiently small numbers of grid points ($\lesssim 100$), the cost is not significantly larger, and may be outweighed by the better convergence afforded by using a more appropriate set of basis functions. This feature was used in \sectionref{Peaks:3DCalculation}, in which the Bessel basis was used in the solution of Gross-Pitaevskii and Truncated Wigner models of outcoupling from a cylindrically-symmetric He* condensate.
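To illustrate this evaluation for the simplest case of the Fourier basis, the following Python sketch multiplies the spectral coefficients by the Laplacian eigenvalues $\lambda_n = -k_n^2$. It is again purely illustrative (natural units and an arbitrary Gaussian test function are assumed), and is not the code generated by \xpdeint.
\begin{verbatim}
import numpy as np

# A minimal sketch for the Fourier basis (not xpdeint code): the kinetic
# energy term is evaluated by multiplying the spectral coefficients by
# the Laplacian eigenvalues lambda_n = -k_n^2.  Natural units are used
# (hbar = M = 1) for illustration only.
N, L = 64, 10.0
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # wavenumber of each basis function

f = np.exp(-(x - L / 2)**2)                 # example wavefunction

c = np.fft.fft(f)                           # to the spectral representation
kinetic_c = -0.5 * (-k**2) * c              # (-hbar^2 / 2M) * lambda_n * c_n
kinetic_f = np.fft.ifft(kinetic_c).real     # back to the spatial representation
\end{verbatim}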
The details of pseudo-spectral methods and their implementation via Gaussian quadrature are beyond the scope of this appendix. This topic is discussed in significant depth elsewhere \citep{SpectralMethods}.
A second significant new feature included in \xpdeint\ is the capability to solve systems of coupled partial differential equations of different dimensionalities (previously, all solution quantities needed to have the same dimensionality). This is important for Hartree-Fock-Bogoliubov problems (see \sectionref{BackgroundTheory:BogoliubovMethods}), where the system is described by the mean-field $\Psi(\vect{x})$, and the correlation functions $G_{N}(\vect{x}, \vect{x}')$ and $G_A(\vect{x}, \vect{x}')$. This is also useful in the Gaussian phase-space method \citep{Drummond:2007,Drummond:2007a} (a generalisation of the phase-space methods that are described in \sectionref{BackgroundTheory:StochasticPhaseSpaceMethods}), which can be applied to fermions as well as bosons. This feature was used in \sectionref{Peaks:PerturbativeApproach} to solve for the dynamics of a matrix-valued function of wavenumber coupled to a zero-dimensional mean-field.
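To illustrate what is meant by coupled equations of different dimensionalities, the following Python sketch integrates a toy model consisting of a zero-dimensional variable coupled to a one-dimensional field, with both advanced by a shared fourth-order Runge-Kutta step. The model and its coupling terms are entirely hypothetical and chosen only for illustration; they are not the equations solved in this thesis.
\begin{verbatim}
import numpy as np

# A toy illustration (not the equations of this thesis): a zero-dimensional
# variable alpha(t) coupled to a one-dimensional field n(k, t), with both
# advanced by the same fourth-order Runge-Kutta step.
k = np.linspace(0.0, 10.0, 200)
dk = k[1] - k[0]
alpha = 1.0 + 0.0j                 # zero-dimensional variable
n = np.zeros_like(k)               # one-dimensional variable
dt = 1e-3

def derivatives(alpha, n):
    # Hypothetical coupling: the field is driven by |alpha|^2, and the
    # zero-dimensional variable is depleted by the total field population.
    d_alpha = -1j * alpha - 0.1 * np.sum(n) * dk * alpha
    d_n = np.abs(alpha)**2 * np.exp(-k) - 0.5 * n
    return d_alpha, d_n

def rk4_step(alpha, n):
    d1 = derivatives(alpha, n)
    d2 = derivatives(alpha + 0.5 * dt * d1[0], n + 0.5 * dt * d1[1])
    d3 = derivatives(alpha + 0.5 * dt * d2[0], n + 0.5 * dt * d2[1])
    d4 = derivatives(alpha + dt * d3[0], n + dt * d3[1])
    alpha = alpha + dt / 6 * (d1[0] + 2 * d2[0] + 2 * d3[0] + d4[0])
    n = n + dt / 6 * (d1[1] + 2 * d2[1] + 2 * d3[1] + d4[1])
    return alpha, n

for _ in range(1000):
    alpha, n = rk4_step(alpha, n)
\end{verbatim}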
A third significant new feature included in \xpdeint\ is the capability to evaluate convolutions and other composite operations. Convolutions are of particular use in the evaluation of non-local atomic interaction terms in which the inter-particle potential depends only on the relative separation of the two particles. Such interaction terms may be efficiently evaluated with Fourier transforms by an application of the convolution theorem \citep{ArfkenWeber}. An important example of such an interaction is the dipole--dipole interaction \citep{Goral:2002}. Independently, this feature of \xpdeint\ has been applied to study the resonant Einstein--de Haas effect \citep{Gawryluk:2007} in metastable helium \citep{Stevenson:2008}, an effect driven by the dipole--dipole interaction. In this thesis, this feature has been applied (although not in the form of calculating a convolution) in \sectionref{Peaks:3DCalculation} in the calculation of the momentum-density flux leaving the computational domain (refer to \sectionref{Peaks:AbsorbingBoundaryTricks}).
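As an illustration of how such a term can be evaluated via the convolution theorem, the following Python sketch computes a non-local interaction term $\Phi(x) = \int U(x-x') |\psi(x')|^2 \,\mathrm{d}x'$ as a circular convolution on a periodic grid. The Gaussian wavefunction and exponential potential are arbitrary choices made only for this example, and the sketch is not \xpdeint-generated code.
\begin{verbatim}
import numpy as np

# Illustrative only (not xpdeint-generated code): evaluating the non-local
# interaction term Phi(x) = int U(x - x') |psi(x')|^2 dx' on a periodic
# grid using the convolution theorem (a convolution in x is a product in
# Fourier space).
N, L = 128, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

psi = np.exp(-x**2)                # example wavefunction
U = np.exp(-np.abs(x))             # example inter-particle potential U(x - x')

density_k = np.fft.fft(np.abs(psi)**2)
U_k = np.fft.fft(np.fft.ifftshift(U))   # roll U so that x = 0 sits at index 0
Phi = np.fft.ifft(U_k * density_k).real * dx
\end{verbatim}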
The feature that motivated the development of \xpdeint\ in the first place, the implementation of the Alternating-Direction Implicit algorithm \citep{NumericalRecipes}, has been used extensively in \chapterref{OpticalPumping}.
A list of other features of \xpdeint\ (all of which are shared with \XMDS) is given below.
\begin{itemize}
\item A variety of interchangeable integration algorithms including the classical fourth-order Runge-Kutta algorithm, a ninth-order Runge-Kutta algorithm, adaptive Runge-Kutta algorithms, and the midpoint method (a second-order semi-implicit algorithm).
\item The interaction-picture algorithm \citep{Caradoc-Davies:2000qy}. This algorithm is an application of the interaction-picture method of quantum mechanics to calculate exactly the contribution of one of the linear terms of the partial differential equation (typically the derivative component); a schematic sketch of this idea is given below this list.
\item A semi-implicit algorithm for the solution of stochastic (partial) differential equations \citep{Drummond:1991,Werner:1997} with stochastic strong order 1 and deterministic order 2.
\item The capability to solve deterministic problems of two or greater dimensions in parallel across multiple computers. Different realisations of stochastic simulations may also be solved in parallel.
\item Quantification of discretisation error in time. With this option, \XMDS\ and \xpdeint\ solve the problem twice, once with the parameters specified, and once more with a smaller time-step (or, in the case of adaptive algorithms, with a smaller tolerance). Both the finer solution and the difference between the two solutions are then output. While not a comprehensive guarantee of numerical convergence, this feature somewhat simplifies the process of validating numerical simulations.
\end{itemize}
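As noted in the item on the interaction-picture algorithm above, the essential idea is that the linear derivative term is applied \emph{exactly} in the spectral representation, and only the remaining terms are integrated numerically. The following Python sketch is a schematic, first-order illustration of that idea only: the harmonic potential and Gaussian initial state are arbitrary, a simple explicit step is used for the remaining term, and the sketch does not reproduce the Runge-Kutta interaction-picture algorithm actually implemented in \XMDS\ and \xpdeint.
\begin{verbatim}
import numpy as np

# A schematic sketch of the interaction-picture idea (not the algorithm
# implemented in xpdeint): for i du/dt = -(1/2) d^2u/dx^2 + V(x) u, the
# derivative term is applied exactly in the spectral representation, and
# the remaining potential term is advanced with a simple explicit step.
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # wavenumbers of the Fourier basis

V = 0.5 * x**2                            # example potential (hbar = m = 1)
u = np.exp(-x**2).astype(complex)         # example initial state
dt = 1e-3

def step(u):
    # Exact evolution under the derivative (kinetic) term for a time dt.
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))
    # Simple explicit (Euler) step for the remaining term; xpdeint instead
    # uses Runge-Kutta algorithms in the interaction picture.
    return u - 1j * V * u * dt

for _ in range(1000):
    u = step(u)
\end{verbatim}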
\parasep
Outside of my own work, \xpdeint\ has been used in the stochastic simulation of conditional master equations \citep{Hush:2009}, the simulation of measurement feedback control of a Bose-Einstein condensate \citep{Szigeti:2009}, and the simulation of a number-phase Wigner representation \citep{Hush:2010}. \xpdeint\ is in use for a variety of other problems, the results of which have not yet been published.
\subsection{Tools and packages used by \xpdeint}
\xpdeint\ takes advantage of several tools and packages created by others, the most important of which are listed here.
\begin{itemize}
\item \xpdeint\ is written in Python \citep{Python} and much of the \texttt{C++} code is generated using Cheetah templates \citep{CheetahTemplates}.
\item The implementation of the Fast Fourier Transform is provided by \texttt{FFTW3} \citep{FFTW3}. This package also provides the functionality necessary to distribute deterministic simulations across multiple computers.
\item The Message Passing Interface (MPI) \citep{MPI} is used for communications between computers participating in a distributed computation.
\item Tools for reading and writing the primary input and output format, \texttt{HDF5}, are provided by the \texttt{HDF5} package \citep{HDF5}.
\end{itemize}
% While none of the techniques described in this appendix are new, the utility of \xpdeint\ is the ease with which such a wide range of algorithms and methods can be applied. While the generality cannot be compared with such tools as MATLAB and Mathematica, the advantage is in the focussed design and speed advantages (for large problems).
% \begin{itemize}
% % \item Coupled ODE-PDE systems
% % \item Tensor-valued equations (although not with very nice notation)
% % \item Convolutions
% % \item Non-local access (ability to calculate correlation functions)
% % \item (Multi-dimensional multi-directional) Cross-propagation
% \item Generates faster code than xmds
% % \item A variety of (mostly explicit) integrators
% % \item IP operator
% % \item Spectral method
% % \item Various basis functions (sine/cosine/Fourier/Bessel/Spherical Bessel/Hermite-Gauss) faster PMMT transform used where possible.
% % \item Gaussian quadrature
% % \item Stochastic integration (Gaussian and/or Poissonian noises)
% % \item Distributed simulations
% \item More descriptive errors
% \item More portable output format (HDF5)
% \item Diagnostics
% \end{itemize}