Mechanics

by
Paulo Machado

A Thesis
Doctor of Philosophy
McMaster University
Hamilton, Ontario

Abstract
Acknowledgements
I would like to express my gratitude to my supervisor D. W. L. Sprung for the
thoughtful guidance, kindness, patience and constructive criticism that I enjoyed over
the years under his supervision.
I also thank Y. Nogami and D. Pelinovsky for their helpful suggestions and interest
in the work as well as the encouragement given by R. Dumont.
This work would not be possible without the positive learning environment and
financial support provided by McMaster University.
I would also like to thank Hannah Sprung, Rajat and Manju Bhaduri as well as
Natalia and Tanya Wolf, Walter Phillips and Errol Rennie for keeping me occupied
and making sure I didn't starve throughout the year.
Finally I would like to thank Jen and my Mom who helped me through the more
challenging moments.
Contents

List of Figures
List of Tables

1 Introduction
1.1
1.2
1.2.1 Schrödinger Formulation
1.2.2 Bohmian Formulation
1.3 Historical Perspective
1.4 Criticisms
2 Bohmian Mechanics
2.1 Hamilton-Jacobi Theory
2.2
2.2.1 Eulerian/Lagrangian Frames
2.3
2.4 Bohmian Trajectories
2.4.1 Definition
2.4.2 Important Properties
2.5
2.6
2.7 Calculating Trajectories
2.7.1 From Wavefunction
2.7.2
2.8 Conservation of Probability
3 Calculations
3.1 Prelude: Units
3.2
3.3
3.3.1 Time Propagation
3.3.2 Boundary Conditions
3.3.3 Calculation of Trajectories
3.3.4 Web Interface
3.4 Bohmian Calculations
3.4.1 Particle Approaches
3.4.2 Newtonian Code
3.4.3 QHJE Code
3.4.4 Recovering the Wavefunction
3.4.5 Choice of Integrators
3.5
3.6 Interpolation Schemes
3.6.1 Polynomial Interpolation
3.6.2 Least Squares
3.6.3 Other Approaches
4 Numerical Breakdown
4.1
4.2 Remedies
4.2.1 Artificial Friction Term
4.2.2 Fourier Residues
4.2.3 Spline Interpolation
4.2.4 Smoothed Splines
4.2.5 Train of Gaussians
4.3 Interference
4.3.1 Dynamic Resampling
4.3.2 Dynamic Timescale
5 Validation, Test Cases
5.1
5.1.1 Table Description
5.1.2
5.1.3 Harmonic Oscillator
5.1.4
5.2
5.3
5.4
6
6.1
6.1.1 ADI Method
6.1.2 Boundary Conditions
6.2
6.3 Bohmian Methods in 2D
6.3.1
7
7.1 Cellular Automata
7.1.1
7.2
7.3
7.4
7.4.1 Dirac Equation
7.4.2 Majorana Representation
7.5 Numerical Experiments
7.5.1 Algorithm
7.6
7.7
7.8 Comments
8 Conclusion
Bibliography
IX
List of Figures

4.7 The residue of a particular wavefunction and the result of low pass filtering
4.12 Trajectories x(t) for the harmonic oscillator in the classical limit (t vs x)
4.13 Detail of effect of "turning on" the quantum potential, over a quarter period
4.14 Avoided crossings between two Gaussians using sharply defined interpolation kernels
4.15 Trajectory detail of Fig. 4.14
4.20 Evolution of the velocity profiles for two Gaussian packets with different separations (x vs v)
4.24 Perturbation in secondary fields not yet apparent in the log probability
7.5 Mean position (left) and standard deviation (right) of the wavepacket vs time
List of Tables

3.1 Conversion factors between atomic units and the SI system
3.2 Coefficient values for method S4a at each intermediate step k
5.1 Interpolation method comparison: free Gaussian packet
5.2 Interpolation method comparison: harmonic oscillator
5.3 Interpolation method comparison: interfering Gaussians
Chapter 1
Introduction
"Things on a very small scale behave like nothing that you have any
direct experience about. They do not behave like waves, they do not
behave like particles, they do not behave like clouds, or billiard balls, or
weights on springs, or like anything that you have ever seen."
Richard Feynman
1.1
1.2
That quantum theory "works" has long been settled by an impressive succession of successful experimental predictions and explanations, so its validity is no longer in question. However, the thorny issue of how the mathematical formulation of quantum theory is to be interpreted is quite another matter.
As a clear sign of its inherent richness, the quantum theory accommodates very different formulations and interpretations, among them the transactional interpretation (Cramer) and, last but not least, the Pilot Wave Formulation (de Broglie-Bohm).
To some, the Pilot Wave or Bohmian theory is one of those disconcerting formulations, regressing as it were to classical concepts such as the moving point particle with a definite momentum and position. One must not be prejudiced against the theory because of this, but use it as a tool, and like any tool use it where it works best.
The two formulations with which we will be concerned are the usual Schrödinger representation or wavefunction formulation, as it is the basis for common grid based computational methods, and the Bohmian formulation, as an alternative computational route.
1.2.1 Schrödinger Formulation
Erwin Schrödinger had hoped that this formulation would cast quantum mechanics in a "congenial" and "intuitive" form, but was distressed when he found that his wavefunctions were set in configuration space and not in ordinary three-dimensional space ([1]).
In the Schrödinger representation of quantum mechanics we deal with a complex wavefunction $\Psi(x,t) = \langle x|\Psi(t)\rangle$, where the ket is in Hilbert space and represents the state of the system.
The time evolution of the wavefunction is obtained by solving the Time-Dependent Schrödinger Equation (TDSE):
$$i\hbar\,\frac{\partial \Psi}{\partial t} = H\Psi \qquad (1.1)$$
1.2.2 Bohmian Formulation
It is only in the initial state, when we assign these sampling points or "particles", that probabilistic arguments are invoked; the rest of the theory is completely deterministic, thus earning the denomination "causal interpretation".
Following the initial setup, these point particles evolve in time according to forces
derived from the sum of the classical and a newly introduced quantum potential.
The quantum potential in turn depends on the particular distribution of the point
particles at a given time.
It is this circular self consistency of the theory that is responsible for all the
complex quantum behaviour it is able to describe (and also for some computing
headaches we may add).
1.3 Historical Perspective
The historical foundations of Bohmian mechanics can be traced back to the hydrodynamical formulation of quantum mechanics due to de Broglie [2, 3] and Madelung [4] in the early days of quantum theory (i.e. the late 1920s to early 1930s).
Some basic aspects of pilot-wave theory were already anticipated in de Broglie's thesis of 1924. His talk at the 5th Solvay Conference in 1927 included an almost complete exposition of the theory (see [5]).
Probably because of immediate criticism of the theory by Pauli and others (as early as the 1927 Solvay Conference [5]), the pilot-wave theory of quantum mechanics was promptly cast into oblivion until David Bohm independently developed and presented it in the early 1950s (see [6]).
David Bohm (from whom Bohmian mechanics takes its name) was incidentally a man of many and varied contributions, and not only to physics. We pay tribute to his memory by quoting Dürr, Goldstein and Zanghì [7], celebrating the life of the then recently deceased David Bohm:
"... and avoided this nonlocality?" That is the problem that [Bell's] Theorem is addressed to. The theorem says: "No! Even if you are smarter than Bohm, you will not get rid of nonlocality", that any sharp mathematical formulation of what is going on will have that nonlocality..."
"In my opinion the picture which Bohm proposed then completely disposes
of all the arguments that you will find among the great founding fathers of
the subject, that in some way quantum mechanics was a new departure of
human thought which necessitated the introduction of the observer, which
necessitated speculations about the role of consciousness, and so on."
"All those are simply refuted by Bohm's 1952 theory... So I think that it
is somewhat scandalous that this theory is so largely ignored in textbooks
and is simply ignored by most physicists. They don't know about it."
The historical record on the computational front, where Bohmian mechanics is used in a more practical way as a basis for direct calculation of trajectories, is more recent. Early works can be traced to the 1970s in the works of Weiner et al. [8]. Then, after another long hiatus, the computational side of Bohmian mechanics was picked up again in the nineties in works by Hua Wu and Sprung [9], Wyatt and Lopreore [10], and by Sales Mayor, Askar and Rabitz [11], which set in motion another wave of interest in the theory, this time from a numerical perspective. A series of works then followed over the following years up to the present day by many authors, including those aforementioned and, amongst others, E. Bittner, B. Poirier, S. Garashchuk, V. Rassolov, S. Goldstein and R.E. Wyatt.
1.4 Criticisms
Bohmian mechanics has never been short of critics. At the 1927 Solvay Conference, Pauli was the first of a long list of eminent physicists to criticize de Broglie's first
ence Pauli was the first of a long list of eminent physicists to criticize de Broglie's first
proposal of a Pilot-Wave theory. In 1952, Pauli again leveled criticism on the now
more developed de Broglie-Bohm theory in his contribution to de Broglie's 60th birth
day volume deeming it "artificial metaphysics" [12, 13] (Pauli's objections revolved
around the break in the correspondence between classical and Bohmian mechanics
pertaining to the symmetric treatment of canonically conjugate variables such as po
sition and momentum. For other historical criticism a good source is Myrvold's paper
[14]). Concerning other less damaging accusations, Peter Holland's book ([15]) does
a good job of summarizing and then shortly addressing them:
- Cannot prove trajectories are real: one cannot prove empirically the completeness postulate either.
- Predicts nothing new: it does permit more detailed predictions pertaining to individual processes.
- Regression to classical physics: it does depend on the "state" of the whole system, represented by the guiding wave (a very non-classical concept).
- Non-locality: non-locality seems a small price to pay if the alternative is to forego any account of objective processes.
- More complicated than quantum mechanics: it is just a reformulation of quantum mechanics.
- Counter intuitive: quantum phenomena require quantum intuition.
- No reciprocal action of the particle on the wave: in fact the wave depends on the positions of the representative particle's trajectories.
Chapter 2
Bohmian Mechanics
For a more exhaustive and complete theoretical treatment of Bohmian mechanics, the definitive reference is Peter Holland's book "The Quantum Theory of Motion" [15]. In this chapter we will limit ourselves to aspects of the theory that are relevant to calculations in this thesis.
2.1 Hamilton-Jacobi Theory
We take the classical Hamilton-Jacobi equation as our starting point, since Hamilton-Jacobi theory, in the shape of the quantum Hamilton-Jacobi equation (QHJE), figures so prominently in Bohmian mechanics:
$$-\frac{\partial S}{\partial t} = \frac{(\nabla S)^2}{2m} + V \qquad (2.1)$$
In fact it can be argued (see [16]) that the seed of wave mechanics and Bohmian
mechanics is contained in the relationship between classical Hamilton-Jacobi theory
and Geometrical Optics.
The analogy starts from the scalar wave equation of optics,
$$\nabla^2 u = \frac{n^2}{c^2}\frac{\partial^2 u}{\partial t^2}, \qquad (2.2)$$
which, for a slowly changing index of refraction n, has a solution of the form $A(r)\exp(ik_0(L(r) - ct))$, where A is the amplitude of the wave and L is called the optical path length or eikonal of the wave. Substituting into eq. 2.2 results, in the short wavelength approximation, in the eikonal equation of geometrical optics, $(\nabla L)^2 = n^2$. The surfaces of constant L are surfaces of constant optical phase that define wave fronts, just as in the case of S from the classical Hamilton-Jacobi equation.
Classical mechanics corresponds then to the geometrical optics limit of wave motion. One might ask what would happen to classical mechanics if we did not take the short wavelength approximation. The answer would be the same as in the case of wave motion: you would get the full wave equation. In the context of mechanics, the analog of the full wave equation is of course the celebrated Schrödinger equation.
So could Hamilton or his peers have stumbled upon wave mechanics nearly 100 years before its actual discovery, with all the associated tantalizing scientific and technological advances that such an early leap in human knowledge would carry? Probably not... there was no reason to suspect that h in $\lambda = h/p$ was anything other than zero, and classical mechanics was deemed to be rigorously true. Experimental evidence suggesting otherwise would not be available until the beginning of the XXth century, in experiments such as those of Davisson and Germer, and others.
2.2
The central element in the Bohmian picture is the concept of a particle with an associated (definite) position and momentum. One can obtain the Bohmian equations of motion by writing the wavefunction in polar form, $\Psi(x,t) = R(x,t)\,e^{iS(x,t)/\hbar}$. Substituting into the TDSE and separating real and imaginary parts leads to two equations ([15]):
$$\frac{\partial R^2}{\partial t} + \nabla\cdot\left(\frac{R^2\,\nabla S}{m}\right) = 0$$
$$-\frac{\partial S}{\partial t} = \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (2.3)$$
The particle is guided by the whole of the wavefunction, thus the name pilot wave theory, as Bohmian mechanics is sometimes referred to (the wave guides the particle).
The usual way of interpreting the extra term is as a non-classical potential. Alternatively (and equivalently) we can consider $-\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$ as being another contribution to the kinetic energy, a form of shape-induced internal energy akin to the role that the internal stress tensor plays in a classical fluid dynamics context.
Quickly glancing at the quantum term, we can already make a few preliminary observations. We note that as this term vanishes we should expect to get classical behaviour from the system. Because R appears both in the denominator and in the numerator (as its gradient), Q will not depend on the magnitude of R but on its curvature. When evaluating it numerically, however, we can anticipate some difficulties in places where R is zero or very small.
2.2.1 Eulerian/Lagrangian Frames
In the Lagrangian frame the time derivative following the flow is $\frac{d}{dt} = \frac{\partial}{\partial t} + v\cdot\nabla$, and the grid points move in time with the moving fluid particles, as opposed to the Eulerian view where the grid is fixed in space.
Taking the gradient of the quantum Hamilton-Jacobi equation (QHJE), eq. 2.3, with $\rho = R^2$, and working in the Lagrangian frame of reference, we obtain a more familiar set of equations ([15]):
$$\frac{d\rho}{dt} + \rho\,\nabla\cdot v = 0, \qquad \frac{dv}{dt} = -\frac{1}{m}\nabla(V + Q), \qquad Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (2.4)$$
Here we can still recognize the first as the continuity equation, but now the second equation reveals itself as just a statement of Newton's second law, $F = ma$, with the force derived from the gradient of the classical plus quantum potentials.
2.3
The quantum potential Q depends on the quantum state of the system, and it incorporates both properties of the particle, such as momentum and position, and properties of the whole system, such as the potential and the observing apparatus:
$$Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (2.5)$$
It is quite peculiar in that it can vary rapidly in places where V, the classical potential, is almost constant. Even more strange is that, since we have a normalizing factor in the denominator, it does not depend on the magnitude of the density ρ itself, but rather on its curvature (or more appropriately on the curvature of its square root R), so we could have a very large quantum potential in an area with negligible probability density (as is the case, for instance, for interference between two Gaussians, as we shall see in later chapters). This can be seen by scaling R by an arbitrary constant a, which leaves Q unchanged:
$$Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 (aR)}{aR} = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$$
Numerically speaking, given that in general ρ can vary exponentially, and thus has an enormous dynamic range, it will be beneficial to represent it instead by its logarithm $C = \log R = \frac{1}{2}\log\rho$. For instance, in the case of Gaussian wavepackets we are going from $e^{a_2x^2 + a_1x + a_0}$ to $a_2x^2 + a_1x + a_0$. So, with $\rho = e^{2C}$, the quantum potential becomes
$$Q = -\frac{\hbar^2}{2m}\left(\nabla^2 C + (\nabla C)^2\right) \qquad (2.6)$$
We can see how, as a consequence of the presence of the differential operators ∇ and ∇², the quantum potential encodes at each point information about its neighbours.
2.4 Bohmian Trajectories
2.4.1 Definition
The Bohmian trajectories are determined, just as in classical mechanics, from the velocity field generated from S according to the definition $v = \frac{\nabla S}{m}$. At each point the particle moves perpendicular to the wavefront of S (parallel to its gradient).
2.4.2 Important Properties
In Bohmian mechanics trajectories may not cross or even touch each other. If they were to cross, at the crossing point two "particles" with distinct momenta would share the same space-time point, which would imply that the underlying wavefunction has two distinct values at the same point in space. Since the wavefunction is single-valued, trajectory crossings are forbidden (see [15] and [9]). So if we see trajectories crossing, we immediately know that something is wrong in our calculations.
Furthermore, at nodes of the pilot wavefunction Ψ the phase S is undefined, and there will be no particle paths going through those points; instead we may have avoided crossings and/or vortices (see [9]) develop around these points, as illustrated in Fig. 2.1. Fig. 2.1 represents the steady state of a 2D L-shaped quantum wire at a particular energy. Solid lines enclose regions containing vortices, and the inset shows behaviour near a stationary point of the velocity, where the flow divides. At nodes S is no longer single-valued and may undergo discrete jumps, $S_n = S + 2\pi n\hbar$, where n is an integer. The contour integral of v around a node will be
$$\oint v\cdot dl = \oint \frac{\nabla S}{m}\cdot dl = \frac{1}{m}\oint dS = \frac{2\pi n\hbar}{m}$$
These sudden jumps in S will give rise to big gradients and can cause numerical instability in algorithms, commonly referred to as the node problem.
To illustrate the above, we use a compact Gaussian as the envelope of our initial wavepacket. It is positive definite everywhere, thereby making the aforementioned node problem less likely to occur. The reason for that resides with the continuity equation, which determines the evolution of the density along a trajectory, $\rho(x,t) = \rho(x_0, 0)\exp\left(-\int_0^t \nabla\cdot v(x(t'), t')\,dt'\right)$.
A further diagnostic comes from the general evolution of expectation values,
$$\frac{d}{dt}\langle A\rangle = \frac{1}{i\hbar}\langle [A, H]\rangle + \left\langle \frac{\partial A}{\partial t}\right\rangle \qquad (2.7)$$
Here the expectation value ⟨A⟩ stands for the average value of measurements of the observable A in a state |ψ⟩. By taking A to be the position operator x or the momentum operator p, we obtain Ehrenfest's theorem, the equations of motion for their mean values (eq. 2.8):
$$\frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m}, \qquad \frac{d}{dt}\langle p\rangle = -\langle \nabla V\rangle \qquad (2.8)$$
We can then use eqs. 2.8, which agree with the classical equations of motion, to monitor the quality of our numerical solution by monitoring the average position and momentum along the trajectories.
The ultimate test for any physical theory is agreement with experiment, so we duly note that both standard quantum theory and Bohmian mechanics predict the same measurement statistics, with the initial particle distribution given by $|\Psi(r_0, t_0)|^2$ (where $t_0$ is the initial time and $r_0$ is some surface on which the boundary conditions are applied). From the solution of the TDSE everything else will follow, and the trajectories will retain their statistical weight through time, yielding measurements that are also probabilistic in nature.
This is useful, for instance, when calculating lifetimes of particles, where we want to know the probability that a particular wavefunction will be contained by some potential. In the trajectory view one simply counts the number of particles/trajectories that have crossed a predetermined boundary.
Note that when using Bohmian trajectories we can determine with no ambiguity
which part of the wavepacket passes and which part remains within the barrier. For
instance we can also see in Fig. 2.2 that most of the transmittance is due to the front
of the wavepacket and not the rear. This is something unthinkable in conventional
quantum theory where the wavefunction is taken as a whole, and such information
would be unavailable.
[Figure 2.2: Bohmian trajectories r(t) for a wavepacket incident on the barrier]
2.7 Calculating Trajectories
2.7.1 From Wavefunction
Calculating trajectories from the wavefunction Ψ(x, t) is really a two step process:
1. Solve the time-dependent Schrödinger equation (TDSE).
2. Integrate the velocity field determined by the phase of the newly calculated Ψ.
We can solve the TDSE analytically for a relatively small number of simple systems (see [15, 19]).
For the vast majority of cases that cannot be solved analytically, with the advent
of cheap and powerful computing, a suitable time propagation method can be used
to numerically solve the TDSE in low dimensional systems.
There are many methods that can be used to solve the TDSE. We can for instance
project the initial wavefunction onto stationary states of the system's Hamiltonian,
and then evolve these according to their eigen-energies (spectral methods).
We can discretize the TDSE on a fixed space-time grid. This can be done using FFT methods, Feynman path integrals, Monte Carlo techniques, operator splitting methods, etc. We use the latter in a Crank-Nicolson scheme, as represented in eq. 2.10, in conjunction with transparent boundary conditions as described in Moyer's paper ([20]) and in appendix A, to set up a "gold standard" against which we can compare the various particle method implementations.
$$\Psi(r, t + \delta t) = e^{-iH\delta t/\hbar}\,\Psi(r, t) \qquad (2.9)$$
$$e^{-iH\delta t/\hbar} \approx \frac{1 - iH\delta t/2\hbar}{1 + iH\delta t/2\hbar} \qquad (2.10)$$
In fact, we should note that this simple Crank-Nicolson method for 1D problems has recently been generalized to higher orders by W. van Dijk and F. M. Toyama (see [21]). They obtained a dramatic improvement in attainable precision for a given amount of computation.
2.7.2
In this method we directly use the Bohmian framework to make numerical calculations. The trajectories come out naturally as a constituent of the calculating procedure.
The direct Bohmian method is a more efficient way of calculating trajectories and, because we use grids that are much sparser than those used in wave based methods, it is usually orders of magnitude faster. However, it is vulnerable to numerical instabilities when interference effects are dominant. We will give more details on this in the next chapter.
2.8 Conservation of Probability
The probability contained in a co-moving volume element, $\rho\,\delta V$, changes at the rate
$$\frac{d(\rho\,\delta V)}{dt} = \delta V\,\frac{d\rho}{dt} + \rho\,\frac{d(\delta V)}{dt} = \delta V\,(-\rho\,\nabla\cdot v) + \rho\,(\nabla\cdot v)\,\delta V,$$
which is always zero. So it will remain constant in time; that is to say, each particle will have the volume "assigned to it" expand and compress as the probability density decreases or increases respectively, but it always represents the same probability volume. This is illustrated in Fig. 2.3: the initial state has two Gaussians placed symmetrically about the origin. As the wave packets spread they interfere, but the trajectories do not cross. The shaded volume represents the same probability volume 1/N at all times; it initially experiences expansion as the probability density decreases while the Gaussian packet is expanding, but then experiences compression when the Gaussian packet "meets" trajectories from the neighbouring packet, giving rise to an increased probability density.
This probability volume can be used to effectively label each particle (as is done in [22]), since each one can easily (at least in 1D) be assigned a cumulative probability that will remain constant throughout time: $P(x_i, t) = \int_{-\infty}^{x_i} \rho(x', t)\,dx'$.
In fact, in the Newtonian code of Sec. 3.4.2, this is how we will select our particle ensemble from Ψ(x, 0): we will place a particle whenever the cumulative probability has increased by 1/N, where N is the chosen number of particles. Thus between two trajectories there will always be the same amount of probability, 1/N, with the exception of the two particles at the extreme boundaries.
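As an illustration of this placement rule, the following sketch (in Python, with hypothetical names; x and psi0 stand for the spatial grid and the initial wavefunction, which are assumed given) places N particles at equal increments of cumulative probability:

    import numpy as np

    def place_particles(x, psi0, N):
        """Place N particles at equal increments 1/N of cumulative probability."""
        rho = np.abs(psi0) ** 2
        P = np.cumsum(rho)
        P /= P[-1]                           # normalized cumulative probability P(x)
        targets = (np.arange(N) + 0.5) / N   # midpoint targets; boundary particles are special
        return np.interp(targets, P, x)      # invert P(x) by linear interpolation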
Chapter 3
Calculations
3.1 Prelude: Units
In atomic units $\hbar = m_e = 1$, and the TDSE takes the simple form $i\,\frac{\partial\Psi}{\partial t} = -\frac{1}{2}\nabla^2\Psi + V\Psi$. In Table 3.1, for convenience, we include the conversion factors between the atomic system of units and the SI system.

Table 3.1: Conversion factors between atomic units and the SI system

  Unit      Description                   SI value
  ħ = 1     Planck's constant (reduced)   1.054571 × 10⁻³⁴ J s
  a₀ = 1    Bohr radius                   5.2917 × 10⁻¹¹ m
  mₑ = 1    electron mass                 9.109382 × 10⁻³¹ kg
  E_h = 1   Hartree energy                4.35974 × 10⁻¹⁸ J
3.2
Why use a time dependent method? Since the inception of quantum theory, time independent methods have always been preferred as being more tractable and efficient, and in most cases synthesizing the relevant physical processes taking place in a system.
Obviously, if we are dealing with a time dependent Hamiltonian we have no choice but to use a time dependent method. However, even in the case of a time independent Hamiltonian it can sometimes be advantageous to use a time dependent method.
To illustrate this, let us consider spectral methods. The standard way these work is by projecting the initial wavefunction onto the eigenvector basis (stationary states) of the system, and then evolving in time a linear combination of these projections:
$$\Psi(t) = \sum_k a_k\,e^{-iE_k t/\hbar}\,|k\rangle \qquad (3.1)$$
with $a_k = \langle k|\Psi(0)\rangle$ and $H|k\rangle = E_k|k\rangle$.
This is often the preferred route, since initially we deal with space only and not space and time. However, in some cases (i.e. scattering) the eigenstates become continuum functions, making the discrete sum in equation 3.1 an integral over continuum states; and if the calculation is required at very large times, it may be difficult to perform the Fourier integrals with sufficient accuracy. In such cases a time dependent method may be advantageous, an early example of which is in Heller's work ([24]).
3.3
As we saw at the end of the previous chapter, Bohmian trajectories can be calculated a posteriori, after the full time evolution of the wavefunction Ψ(x, t) is known. Since, apart from a handful of cases, analytical solutions are generally not available, we must turn to calculating the time evolution of the system via some standard method that is known to work. One such method, augmented by transparent boundary conditions, is described in the next sections.
3.3.1 Time Propagation
$$e^{-iH\delta t/\hbar} = \frac{1 - iH\delta t/2\hbar}{1 + iH\delta t/2\hbar} + O(\delta t^3)$$
This approximant is second order accurate and, more importantly, unitary. The Numerov algorithm is used to extend the accuracy in the spatial domain to fifth order.
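As a minimal sketch of the basic scheme (without the Numerov refinement or the transparent boundary conditions described next; hard-wall boundaries and ħ = m = 1 are assumed, and all names are illustrative), one Crank-Nicolson step can be written:

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import spsolve

    def crank_nicolson_step(psi, V, dx, dt):
        """Advance psi by dt using the unitary Cayley form of exp(-iH dt)."""
        N = psi.size
        lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
        H = -0.5 * lap + diags(V)            # H = -(1/2) d2/dx2 + V on the grid
        A = identity(N) + 0.5j * dt * H      # (1 + iH dt/2) psi_new =
        B = identity(N) - 0.5j * dt * H      #     (1 - iH dt/2) psi_old
        return spsolve(A.tocsc(), B @ psi)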
3.3.2 Boundary Conditions
Transparent Boundary Conditions (TBC) are very convenient in that they allow us to concentrate, in great detail, on a small volume where the "interesting" physics may be happening, without worrying about unwanted wave reflections from artificial boundaries. Complex potentials are sometimes used as an absorbing layer on the boundaries, to avoid having to enlarge the computing domain, but they never work perfectly.
In our case TBCs are also essential for easy comparisons between wave and particle methods, as TBCs are inherent to the Bohmian method (that is one of its advantages).
[Figure 3.1: Time evolution of a free Gaussian wavepacket with transparent boundary conditions; x runs from -40 to 40 nm.]
TBCs for different kinds of differential equations are discussed by Matthias Ehrhardt in [25]. In particular, discrete TBCs in one dimension for the Schrödinger equation were derived in [26], and an adaptation to the Numerov method is presented by Moyer ([20]).
We can observe the time evolution of a free Gaussian with some initial momentum to the right in Fig. 3.1; the packet spreads in time, but no reflection occurs when it hits the domain walls because TBCs are enabled.
We should note that the implementation of TBCs comes at the cost of the minor inconvenience that we need to record the wavefunction at the boundaries at each time step, and store this history. This has an increasing performance cost as time progresses. The reason these values are needed is that, at the system boundaries, there are fluxes of probability in both directions, and it is not valid to suppose purely outgoing flux, even when the net flux is to the right.
3.3.3 Calculation of Trajectories
The phase S is extracted from the computed wavefunction only modulo 2π, so there are points where sudden jumps of 2π are required for continuity. These jumps indicate a crossing of the ordinate axis, as illustrated by Fig. 3.4. If we were to take just the gradient of the phase provided by the computer code, we would be presented with unphysical velocity jumps at those points, resulting in erroneous trajectories.
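A sketch of the corrected velocity extraction (Python with NumPy; numpy.unwrap removes the spurious 2π jumps along the grid; ħ = m = 1 is assumed for brevity):

    import numpy as np

    def bohmian_velocity(psi, dx):
        """Velocity field v = grad(S)/m from a wavefunction on a uniform grid."""
        S = np.unwrap(np.angle(psi))   # continuous phase, 2*pi jumps removed
        return np.gradient(S, dx)      # v = dS/dx (with m = hbar = 1)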
Figure 3.3: Time evolution of the wavepacket incident on a double barrier potential, from the left. (Notice the transmitted and metastable states, to the right and in the middle respectively.)
[Figure 3.4: Phase of the computed wavefunction, showing the 2π jumps]
Figure 3.5: Bohmian trajectories corresponding to the wave function of Fig. 3.3. (The viewing angle is rotated by 90 degrees; time is now the x axis.)
Once we have the correctly calculated velocities, we can proceed to integrate the position, $x(t) = x(0) + \int v\,dt$.
3.3.4 Web Interface
We should add that, for convenience, at the time we developed a website that took as inputs the various parameters of a calculation, such as dimensions, time steps, potential, initial wavepacket, etc., and then fed these to a background running program. After a calculation-related delay, a webpage with both the wavefunction solution and the trajectory results was generated.
[Figure 3.6: The web interface, with input fields for the effective masses, grid size, number of time steps, time step (fsec), and potential, and a "Submit job" button.]
3.4 Bohmian Calculations
3.4.1 Particle Approaches
One option is to label each particle by its cumulative probability, $P(x) = \int_{-\infty}^{x}\rho(y)\,dy$, thus increasing by one order the derivatives required for the Q calculation. We will implement both methods of keeping track of the probability.
3.4.2 Newtonian Code
In this form we work directly with the continuity equation and Newton's second law:
$$\frac{d\rho}{dt} + \rho\,\nabla\cdot v = 0, \qquad \frac{dv}{dt} = -\frac{1}{m}\nabla(V + Q), \qquad Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (3.2)$$
The procedure is:
- Start with a statistical ensemble representative of the initial state of the system, $\{x_i, v_i, \rho_i\}$.
- Calculate Q from $\{x_i, \rho_i\}$ or $\{x_i, P_i\}$, depending on the labeling method used (more on this later).
- Update $v_i$ via the gradient of $V + Q$ ($\frac{dv}{dt} = -\frac{1}{m}\nabla(V + Q)$).
- Update $x_i$ from $v_i$ ($\frac{dx}{dt} = v$).
- Update the density via the continuity equation, $\rho = \rho_0\,e^{-\int\nabla\cdot v\,dt}$.
At this point we have advanced by δt in time and the whole process is repeated.
If we want to avoid dealing with P (and its third order derivatives for Q), we can use ρ instead, provided we update it at each time step using the continuity equation, as in the last step above.
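A schematic single time step of this procedure is sketched below (a first-order Euler update for clarity, not the symplectic integrator used in practice; quantum_potential stands for any of the interpolation schemes discussed in Sec. 3.6 and is assumed given):

    import numpy as np

    def newtonian_step(x, v, rho, V, quantum_potential, dt, m=1.0):
        Q = quantum_potential(x, rho)                  # Q from the current particle layout
        F = -np.gradient(V(x) + Q, x)                  # F = -grad(V + Q)
        v = v + (F / m) * dt                           # update velocities
        x = x + v * dt                                 # update positions
        rho = rho * np.exp(-np.gradient(v, x) * dt)    # continuity equation step
        return x, v, rho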
3.4.3 QHJE Code
Alternatively, we can evolve the phase S itself, using the quantum Hamilton-Jacobi form:
$$\frac{\partial\rho}{\partial t} + \nabla\cdot\left(\rho\,\frac{\nabla S}{m}\right) = 0, \qquad -\frac{\partial S}{\partial t} = \frac{(\nabla S)^2}{2m} + V + Q, \qquad Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (3.3)$$
The procedure is:
- Start with a statistical ensemble representative of the initial state of the system, $\{x_i, S_i, \rho_i\}$.
- Calculate Q from $\{x_i, \rho_i\}$.
- Update $S_i$ in the Lagrangian frame, where $\frac{dS}{dt} = \frac{(\nabla S)^2}{2m} - (V + Q)$.
- Update $x_i$ from $v_i = \nabla S/m$.
At this point we have advanced by δt in time and the whole process is repeated.
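A corresponding sketch of one QHJE update (again a first-order step for clarity, with quantum_potential an assumed interpolation routine):

    import numpy as np

    def qhje_step(x, S, rho, V, quantum_potential, dt, m=1.0):
        v = np.gradient(S, x) / m                      # v = grad(S)/m
        Q = quantum_potential(x, rho)
        S = S + (0.5 * m * v**2 - (V(x) + Q)) * dt     # dS/dt = (grad S)^2/2m - (V+Q)
        x = x + v * dt
        rho = rho * np.exp(-np.gradient(v, x) * dt)    # continuity equation step
        return x, S, rho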
3.4.4 Recovering the Wavefunction
At any time we can recover the standard wavefunction view of the system ([8]) by integrating both phase and density along a trajectory.
The evolved density is determined by integrating the equation of continuity, $\rho(x,t) = \rho(x,0)\,e^{-\int\nabla\cdot v\,dt}$, and the phase by
$$S(x,t) = S(x,0) + \int\frac{dS}{dt}\,dt = S(x,0) + \int\left(\frac{(\nabla S)^2}{2m} - (V + Q)\right)dt,$$
so that
$$\Psi(x,t) = \sqrt{\rho(x,t)}\;e^{iS(x,t)/\hbar}. \qquad (3.4)$$
3.4.5 Choice of Integrators
Given the equations of motion, the question arises of what kind of integrator to use to evolve them in time. Since the time integration will be at the core of our procedure, special care in terms of accuracy and efficiency is required in picking the numerical integrator to use.
One other consideration is that, as we can see from Sec. 3.4.2, we are integrating Hamiltonian equations of motion, for which symplectic integrators are the natural candidates, so it makes sense to choose one from that class.

Table 3.2: Coefficient values for method S4a at each intermediate step k (reproduced from [28], [29])

  k    a_k                           b_k
  1    (2 + 2^(1/3) + 2^(-1/3))/6    1/(2 - 2^(1/3))
  2    (1 - 2^(1/3) - 2^(-1/3))/6    1/(1 - 2^(2/3))
  3    (1 - 2^(1/3) - 2^(-1/3))/6    1/(2 - 2^(1/3))
  4    (2 + 2^(1/3) + 2^(-1/3))/6    0
To help us do that we refer to the paper by Gray (1994) [28], where he compares a range of symplectic integrators, as well as more standard ones such as fourth order Runge-Kutta, in terms of their speed and conservation properties in a molecular dynamics context.
We selected the fourth order method S4a (which was originally introduced in [29]) because, of all the methods examined, it provided the best balance between speed and accuracy. The coefficients used, a_k and b_k, are taken from [28] and listed in Table 3.2. They are used to integrate the equations of motion as directed by eq. 3.5:
$$\dot{p} = -\frac{\partial H}{\partial q}, \qquad \dot{q} = \frac{\partial H}{\partial p} \qquad (3.5)$$
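A sketch of one S4a step (Python; force(q) is assumed to return -∂H/∂q, and the coefficients are those of Table 3.2 as reconstructed above):

    import numpy as np

    c = 2.0 ** (1.0 / 3.0)
    A_COEF = np.array([2 + c + 1/c, 1 - c - 1/c, 1 - c - 1/c, 2 + c + 1/c]) / 6.0
    B_COEF = np.array([1 / (2 - c), 1 / (1 - c**2), 1 / (2 - c), 0.0])

    def s4a_step(q, p, force, dt, m=1.0):
        """One fourth-order symplectic step: alternate drifts (a_k) and kicks (b_k)."""
        for a_k, b_k in zip(A_COEF, B_COEF):
            q = q + a_k * dt * p / m           # drift: dq/dt = dH/dp = p/m
            if b_k != 0.0:
                p = p + b_k * dt * force(q)    # kick: dp/dt = -dH/dq
        return q, p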
3.5
We have seen in Sec. 2.4 that each particle carries with it a predetermined amount of probability. We usually pick 1/N for each trajectory when using the Newtonian form of the Bohmian equations. In the QHJE case we often use an initially uniform grid, in which each probability volume is fixed but different for each trajectory. In either case, the cumulative probability that a particle carries (at least in 1D) unequivocally identifies it and can be used as a label. When required, we can calculate the probability density as $\rho = \frac{dP}{dx}$, which follows from the definition $P(x) = \int_{-\infty}^{x}\rho(y)\,dy$.
Depending on which variables we choose to work with (they are going to vary with the different methods of interpolation), Q, the quantum potential, and F_Q, the quantum force, will be given, with $R = \sqrt{\rho}$, by:
$$Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} \qquad (3.6)$$
$$Q = -\frac{\hbar^2}{4m}\left[\frac{\nabla^2\rho}{\rho} - \frac{1}{2}\left(\frac{\nabla\rho}{\rho}\right)^2\right] \qquad (3.7)$$
In terms of P (in 1D, with $\rho = P'$):
$$Q = -\frac{\hbar^2}{4m}\left[\frac{P'''}{P'} - \frac{1}{2}\left(\frac{P''}{P'}\right)^2\right] \qquad (3.8)$$
$$F_Q = -\nabla Q \qquad (3.9)$$
Or instead, using ρ and its logarithm, we obtain with $C = \log R = \frac{1}{2}\log\rho$:
$$Q = -\frac{\hbar^2}{2m}\left(\nabla^2 C + (\nabla C)^2\right) \qquad (3.10)$$
$$F_Q = -\nabla Q = \frac{\hbar^2}{2m}\left(\nabla^3 C + 2\,\nabla C\,\nabla^2 C\right) \qquad (3.11)$$
This logarithmic form has a number of advantages:
- It more naturally represents an exponentially decaying probability.
- It avoids the problem of dividing by a possibly very small number.
- Last but not least, C lends itself more naturally to a polynomial interpolation than ρ (i.e. for a Gaussian, C is a quadratic polynomial).
The non-locality of Q is patent in the ∇ and ∇² terms and presents a significant numerical challenge. In the next few sections we will present some ways of extracting their values from our discrete set of trajectories.
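A sketch of the logarithmic evaluation (eqs. 3.10 and 3.11), using plain finite differences as a stand-in for the interpolation schemes of the next section (ħ = m = 1 assumed):

    import numpy as np

    def quantum_terms(x, C):
        """Q and F_Q from the log amplitude C = log R on particle positions x."""
        dC = np.gradient(C, x)
        d2C = np.gradient(dC, x)
        Q = -0.5 * (d2C + dC**2)    # eq. 3.10 with hbar = m = 1
        FQ = -np.gradient(Q, x)     # eq. 3.11, F_Q = -grad Q
        return Q, FQ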
3.6 Interpolation Schemes
3.6.1 Polynomial Interpolation
We are given a set of discrete points and are asked to calculate gradients; the simplest and most intuitive choice is to approximate these points by continuous polynomial functions, which can easily be differentiated.
This interpolation scheme yields the most local approximations to Q, and in fact for the case of 1 or 2 neighbouring points it reduces to the common two and three point formulas for the first derivative, respectively.
Given a particle and its N-1 nearest neighbours at distances $\Delta x_n$, with values $y_n$, we expand $y = c_1 + c_2\,\Delta x + c_3\,(\Delta x)^2 + \dots$, where $c_n = \frac{1}{(n-1)!}\,\frac{d^{n-1}y}{dx^{n-1}}\Big|_{x=x_0}$. The coefficients follow from the linear system
$$\begin{pmatrix} \Delta x_1 & (\Delta x_1)^2 & (\Delta x_1)^3 & \cdots \\ \Delta x_2 & (\Delta x_2)^2 & (\Delta x_2)^3 & \cdots \\ \Delta x_3 & (\Delta x_3)^2 & (\Delta x_3)^3 & \cdots \\ \vdots & & & \end{pmatrix} \begin{pmatrix} c_2 \\ c_3 \\ c_4 \\ \vdots \end{pmatrix} = \begin{pmatrix} y_1 - y_0 \\ y_2 - y_0 \\ y_3 - y_0 \\ \vdots \end{pmatrix} \qquad (3.12)$$
[Figure 3.7: A high degree polynomial fit oscillating around its data; the underlying is a simple parabola with only one offset point near x = 5]
As the book Numerical Recipes [31] reminds us, this is a Vandermonde matrix, which can be quite ill-conditioned, so we can expect the coefficients thus obtained to be undesirably sensitive to small variations in the data points.
Instability is especially obvious when a large number of points is involved in the polynomial interpolation. This is illustrated in Fig. 3.7, where we start with a perfect parabolic distribution of points and slightly shift only one point (near 5). We can observe how a high degree polynomial fit tends to oscillate wildly around the interpolated points as it tries to pass exactly through each and every one of them (Runge's phenomenon).
Lagrange interpolation thus has some desirable characteristics, such as simplicity and the fact that it hits each point exactly, but as one makes it less local (by including more neighbouring points in the interpolation) it becomes less and less stable.
One should note as well, given the form of eq. 3.11 and in particular its ∇³C term, that the polynomial used should be of at least fourth degree in order to contribute to that term in the quantum force. This observation holds for other types of interpolation that depend on polynomials, such as the one discussed in the next section.
3.6.2 Least Squares
We noted before that Q is non-local, so one can attempt to capture more of this non-locality by including a larger number of nearest neighbours in the polynomial interpolation. Unfortunately, as we saw in Sec. 3.6.1 and illustrated by Fig. 3.7, polynomial interpolation does not do well with many points, becoming hypersensitive to noise.
If we instead increase the number of points in the interpolation without increasing the degree of the polynomial, we again get a linear system similar to eq. 3.12, but now for M points: an M×N matrix with M > N, which we can solve using for instance the SVD method (singular value decomposition, see [56]). This is now a semi-local evaluation of the derivatives, since we are taking information from more and more distant neighbouring particles when evaluating gradients.
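A minimal sketch of this semi-local estimate (np.polyfit solves the overdetermined system by least squares internally; degree D and neighbourhood size M are illustrative parameters):

    import numpy as np

    def lsq_derivatives(x, y, D=4, M=8):
        """First and second derivatives from a degree-D fit to the M nearest neighbours."""
        d1, d2 = np.empty_like(y), np.empty_like(y)
        for i in range(x.size):
            nb = np.argsort(np.abs(x - x[i]))[:M]      # the M nearest neighbours of x_i
            coef = np.polyfit(x[nb] - x[i], y[nb], D)  # local expansion about x_i
            d1[i] = coef[-2]                           # y'(x_i)  = c_1
            d2[i] = 2.0 * coef[-3]                     # y''(x_i) = 2 c_2
        return d1, d2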
3.6.3 Other Approaches
[Figure 3.9: Example of free packet evolution using least squares interpolation]
As a testimony to the computational versatility of the Bohmian interpretation, many further variations on these interpolation schemes have appeared in the literature.
Chapter 4
Numerical Breakdown
The particle method works quite well and is known to be stable for Gaussian packets in free space and in linear and quadratic potentials, where the time evolution of the wavefunction is shape preserving (these are, incidentally, cases where analytical solutions also exist).
However, if we introduce a minute cubic or quartic term in the potential, or a barrier, the system will, given enough time steps, become unstable, eventually leading to nonsensical conditions such as trajectory crossings, the telltale sign that something has gone wrong.
In this chapter we will explore the causes and conditions that affect the stability of the particle method. We will be very interested in what role numerical instabilities caused by interference play in this, and finally we will introduce some alternate ways of evaluating the quantum potential that may have better stability and fidelity properties.
4.1 Crossing Trajectories
Referring back to Sec. 2.4.2, we recall that one of the important properties mentioned was that particle trajectories should not cross; this is a consequence of the single-valuedness of the underlying wavefunction. If we do observe crossings, we can be assured that something has gone wrong with the calculations that led to those crossings (see Fig. 4.1).
When two trajectories approach each other in a sudden manner, it becomes difficult to reliably estimate ρ from dP/dx. Furthermore, because of the denominator term in Q, regions of very small R produce huge values of Q (and high gradients), which in turn lead to high particle velocities which may then catapult a particle towards another. In fact, because ρ = dP/dx, as the spacing between trajectories collapses ρ becomes infinite, or in numerical terms "diverges"; this can be seen in the sudden scattering of particles in Fig. 4.1.
In the real world there has to be a force that prevents the trajectories from getting
too close. This is the quantum force term (Sec. 3.5), and a trajectory crossing is an
indication of too big a time step or a badly calculated quantum force (i.e. insufficient
magnitude or pointing in the wrong direction).
To illustrate how this could happen, let us take the example of the least squares interpolation: because the fitted curve does not pass exactly through every point, it can underestimate the quantum force just where it is needed most. Two things keep trajectories from getting excessively close to each other: the proper interpolation of Q, and a time step small enough to enable the particles to respond to Q and decelerate. The time step issue will be addressed in Sec. 4.3.2.
4.2 Remedies
4.2.1 Artificial Friction Term
Adding an artificial friction term of the form $-\gamma v$ to the equation of motion will obviously destroy any energy conservation properties, so it is certainly not a way to get the proper time evolution of the system. However, a slight amount of artificial friction can be beneficial to the stability of the simulation, and may be introduced so as to dampen spurious numerical oscillations that typically occur in the evaluation of gradients. If the friction is low enough, we can do this without completely changing the character of the system.
If one takes this to an extreme, and increases the friction too much, the system
will evolve into its ground state. The ground state in a Bohmian context is peculiar in
that all movement stops as the particles remain static at the bottom of the potential.
In Fig. 4.4 we can see the system relaxing towards the ground state of a potential
well.
[Figure 4.4: From top to bottom: phase space, time evolution and dynamic time scale. (Note the convergence to the ground state, where the Bohmian particles remain motionless.)]
[Figure 4.5: Bottom: corresponding -Q]

4.2.2 Fourier Residues
This representation arises naturally from the use of Gaussian wavepackets (where the log density is a quadratic form) and from the fact that, when we have potential reflection, we should expect interference terms to start developing in the density function (see Fig. 4.6).
These interference terms are characteristically oscillatory in nature and thus favorably inclined to be represented by a Fourier expansion.
Mathematically, the approximation to the log density with an oscillatory residue is $C(x) \approx \mathrm{Poly}_m(x) + \mathrm{Res}(x)$. The polynomial (of order m) part of C, the log density, can be determined for instance in a least squares way.
[Figure 4.7: The residue of a particular wavefunction and the result of low pass filtering]
The residue $C(x) - \mathrm{Poly}_m(x)$ is calculated and can be expanded in a (discrete) sine series (DST), as in equation 4.1:
$$y(k) = \sum_{n=1}^{N} x(n)\,\sin\!\left(\pi\frac{kn}{N+1}\right), \qquad k = 1, \dots, N \qquad (4.1)$$
Recalling equation 3.11, in order to calculate the quantum force we will require C', C'' and C'''. Since we are now working in Fourier space, at this point, if desired, we can low-pass filter the residue in momentum space (Fig. 4.7) by adjusting the coefficients of Res(k), and then make use of the differentiating property of Fourier transforms, $\mathcal{F}(f') = ik\,\mathcal{F}(f)$:
$$C' = \mathrm{Poly}' + \mathcal{F}^{-1}\big(ik\,\mathrm{Res}(k)\big), \quad C'' = \mathrm{Poly}'' + \mathcal{F}^{-1}\big((ik)^2\,\mathrm{Res}(k)\big), \quad C''' = \mathrm{Poly}''' + \mathcal{F}^{-1}\big((ik)^3\,\mathrm{Res}(k)\big) \qquad (4.2)$$
Here $\mathcal{F}^{-1}$ is the inverse discrete sine transform (IDST) that takes us back to position space and is the reverse of eq. 4.1:
$$x(n) = \frac{2}{N+1}\sum_{k=1}^{K} y(k)\,\sin\!\left(\pi\frac{kn}{N+1}\right), \qquad n = 1, \dots, K \qquad (4.3)$$
One caveat is that the sine expansion forces the residue to vanish at the boundaries, so the reconstructed derivatives of C tend to oscillate for points near the edge, giving rise to problems in future time steps.
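A sketch of the full procedure (Python with SciPy; the polynomial degree and the filter fraction are illustrative parameters, and term-by-term differentiation of the filtered series is omitted for brevity):

    import numpy as np
    from scipy.fft import dst, idst

    def lowpass_log_density(x, C, deg=4, keep=0.10):
        """Polynomial part plus low-pass-filtered sine-series residue of C."""
        poly = np.polynomial.Polynomial.fit(x, C, deg)  # least squares polynomial part
        res = C - poly(x)                               # oscillatory residue
        y = dst(res, type=1)                            # sine series, as in eq. (4.1)
        y[int(keep * y.size):] = 0.0                    # keep only the lowest modes
        return poly(x) + idst(y, type=1)                # back to position space, eq. (4.3)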
4.2.3 Spline Interpolation
We have seen in Sec. 3.6.2 that the least squares method, although less local, smoother and more reasonable looking than the polynomial interpolation, will not prevent trajectory crossings, because it does not pass through every point.
Figure 4.8: Comparison of two interpolation methods with the same arbitrarily distributed points. (The spline method is distinctly more physical.)
4.2.4 Smoothed Splines
A smoothing spline SS(x) is chosen to minimize a weighted combination of residual and roughness,
$$p\sum_{j=1}^{N} w(j)\,\big(C_j - SS(x_j)\big)^2 + (1 - p)\int \lambda(x)\,\big(\nabla^2 SS\big)^2\,dx,$$
where the parameter p controls how smooth the spline will be, by emphasizing either closeness to the points (p → 1) or low curvature (p → 0); the $C_j$ are the data values at sites $x_j$, and w and λ are site dependent weights. We can, by adjusting this local parameter p, generate a smoother interpolation of C near the edges while remaining close to the original splines in the middle points.
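SciPy's smoothing splines implement a closely related criterion (a single global smoothing knob s rather than the site-dependent weights w and λ above), and serve as a stand-in sketch:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.linspace(-5.0, 5.0, 50)
    C = -0.5 * x**2 + 0.01 * np.random.randn(50)         # noisy quadratic log density
    ss = UnivariateSpline(x, C, k=5, s=1e-2)             # quintic smoothing spline
    dC, d3C = ss.derivative(1)(x), ss.derivative(3)(x)   # smooth C' and C''' for F_Q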
4.2.5 Train of Gaussians
This alternate interpolation scheme takes its inspiration from the field of digital signal processing, in particular from the Shannon sampling of a discrete signal, and specifically of an irregularly sampled discrete signal, as our Bohmian points really are.
Because the density is at all times positive definite, we should enforce this by replacing the sinc kernel of Shannon sampling, which allows for negative values, with a Gaussian shape that is convolved at each sample point, represented by a delta function. This procedure is illustrated graphically in Fig. 4.11.
[Figure 4.11: Train of Gaussians; the solid line is the resulting ρ and the arrows represent sampling points]
Normally each point represents the same probability volume, thus the Gaussians will have the same volume, resulting in sharper distributions when points are closer together and more diffuse ones when points are far apart. Mathematically, what we are doing is convolving a series of delta functions (the particle locations) with Gaussians of varying standard deviation but the same area (thus the different heights in Fig. 4.11):
$$\rho(x) = \left[\frac{1}{N}\sum_{n=1}^{N}\delta(x - x_n)\right] * G_n(x, \sigma_n) = \frac{1}{N}\sum_{n=1}^{N} G_n(x - x_n, \sigma_n), \qquad (4.4)$$
where $G_n(x, \sigma_n)$ represents a Gaussian centered at 0 with a standard deviation of $\sigma_n$.
Recalling the form of the quantum force when using ρ, as in eq. 3.7, we are going to need terms of the form $\nabla\rho/\rho$, $\nabla^2\rho/\rho$ and $\nabla^3\rho/\rho$. Given the Gaussians, it is just a (laborious) matter of calculating $G_n$, $G_n'$, $G_n''$ and $G_n'''$ analytically and combining these according to eq. 3.7 to get an analytic form of Q for a given particle distribution:
$$\frac{\nabla\rho(x)}{\rho(x)} = \frac{\sum_n G_n'(x - x_n, \sigma_n)}{\sum_n G_n(x - x_n, \sigma_n)}, \qquad \frac{\nabla^2\rho(x)}{\rho(x)} = \frac{\sum_n G_n''}{\sum_n G_n}, \qquad \frac{\nabla^3\rho(x)}{\rho(x)} = \frac{\sum_n G_n'''}{\sum_n G_n}$$
Finally, we can plug these values into the quantum potential or the quantum force as required.
[Figure 4.12: Trajectories x(t) for the harmonic oscillator in the classical limit (t vs x)]
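As a sketch, the density and its first derivative can be assembled termwise (the width prescription shown, σ_n proportional to the local particle spacing, is one natural choice, not necessarily the exact one used here):

    import numpy as np

    def gaussian_train(xg, xp):
        """rho(x) and rho'(x) on grid xg from a train of Gaussians at particles xp."""
        xs = np.sort(xp)
        sig = np.gradient(xs)                   # local spacing sets each kernel width
        d = xg[:, None] - xs[None, :]
        G = np.exp(-d**2 / (2 * sig**2)) / (np.sqrt(2 * np.pi) * sig * xp.size)
        return G.sum(axis=1), (-d / sig**2 * G).sum(axis=1)   # rho, drho (analytic G')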
Classical limit from vanishing Q. The classical limit is approached whenever the system's action is large compared to ħ, or alternatively Q → 0. By turning off the quantum potential we can appreciate the differences between the classical and quantum worlds.
In Figure 4.12, which represents a harmonic oscillator, we can see that with Q = 0 we have trajectory crossings, and every particle follows a sinusoidal path independent of the other particles. Fig. 4.13 shows in detail the effect of turning the quantum potential back on.
[Figure 4.13: Detail of the effect of "turning on" the quantum potential over a quarter period of the oscillation (t vs x); panel (a) shows the classical case]
With Q turned back on we recover proper Bohmian trajectories. In this case we use the train of Gaussians interpolation method, but with somewhat sharper density distributions than usual.
Looking at Fig. 4.15, it is apparent that we don't actually have trajectory crossings, but if we add a property of indistinguishability to the neighbouring particles, those trajectories would appear to cross when we zoom back out to Fig. 4.14. This is speculative, but it would seem that Bohmian mechanics would, in this way, enable a natural emergence of classical behaviour.
4.3
Interference
"The double slit experiment has in it the heart of quantum mechanics. In
reality it contains the only mystery."
Richard Feynman
How does interference arise in our Bohmian context? To study that, we are going to use two Gaussians freely evolving in space and eventually interfering together. To obtain the proper trajectories, we first calculate the wavefunction using the techniques of Sec. 3.3. In Fig. 4.16 we can see regions of compression (in red) and expansion (in blue), typical of interference patterns, emerging as time passes.
[Figure 4.14: Avoided crossings between two Gaussians using sharply defined interpolation kernels]
[Figure 4.15: Trajectory detail of Fig. 4.14]
[Figures 4.16 and 4.17: Interference of two Gaussian packets; in Fig. 4.17 the space and time axes have been interchanged as compared to Fig. 4.16]
In the frames of Fig. 4.19 we can readily see one reason why it may be difficult to capture this behaviour using the traditional particle treatment: insufficient sampling.
The interference emerges from a zone of negligible density, where there are very few particles encoding the log-density field. In a density weighted particle distribution for this case, the majority of the particles will initially be far away from the zone where interference develops, and so will not be able to account for it; some resampling of that area will be in order if we want to capture that information earlier in the simulation.
4.3.1 Dynamic Resampling
The sharp features in Q seed an associated discontinuity in the velocity fields of Fig. 4.20 that will self-feed in time and grow into the interference patterns of the density distribution.
There is no mystery as to how a trajectory on the far right "knows" that there is a trajectory on the far left: that information is carried by the quantum potential, and it has its origin in that small bump in Q between the two Gaussians.
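One simple resampling possibility, sketched below under the assumption of linear interpolation of the fields (not necessarily the exact scheme used here), is to insert a particle at the midpoint of every excessively wide gap; the probability weights must of course be re-assigned afterwards.

    import numpy as np

    def resample(x, C, S, gap_max):
        """Insert particles at the midpoints of gaps wider than gap_max."""
        wide = np.where(np.diff(x) > gap_max)[0]
        if wide.size == 0:
            return x, C, S
        xm = 0.5 * (x[wide] + x[wide + 1])        # midpoints of the wide gaps
        Cm, Sm = np.interp(xm, x, C), np.interp(xm, x, S)
        order = np.argsort(np.concatenate([x, xm]))
        return (np.concatenate([x, xm])[order],
                np.concatenate([C, Cm])[order],
                np.concatenate([S, Sm])[order])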
[Figure 4.19: Evolution of the log-density (upper graphs) and quantum potential Q (lower graphs) at selected times. On each panel, representing a particular timestep, the log density and corresponding Q are drawn versus space.]
[Figure 4.20: Evolution of the velocity profiles for two Gaussian packets with different separations (x vs v). Left panels: the Gaussians are initially centered 7 units of distance apart; right panels, 15.]
4.3.2 Dynamic Timescale
In order to estimate the error in our calculation at the end of each time step, we compare the results (as measured by the particle positions, or more accurately by the gradients of a field like S or C) with those obtained by taking two half steps. This way we can monitor the error and keep it below some predetermined threshold by making dynamical adjustments to the time step. At the end of the process we get to keep the more accurate two-half-step result, so the extra calculations involved are not completely wasted.
In our particular calculations we cut the time step in half in case of too large an error, and increase it by 1% if the error is acceptable; this enables the time step to "recover" from very low values in regions where that is possible, as illustrated in Fig. 4.22.
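In sketch form (step advances the whole system by dt, and error compares two candidate states; both are assumed given):

    def adaptive_step(state, dt, step, error, tol):
        """Step doubling: compare one full step against two half steps."""
        full = step(state, dt)
        half = step(step(state, 0.5 * dt), 0.5 * dt)
        if error(full, half) > tol:
            return state, 0.5 * dt, False   # reject; retry with half the time step
        return half, 1.01 * dt, True        # keep the more accurate two-half-step result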
In the case of a Gaussian packet in a parabolic potential, we get a decreasing time
step as the packet approaches the bottom of the potential where it has maximum
momentum and highest density of trajectories (and therefore where we need a smaller
time step).
Following this point of maximum trajectory density, circa step 750 in Fig. 4.22,
the time step gradually recovers to larger values accelerating the simulation.
Furthermore, if a more serious error condition is detected, such as a trajectory crossing (as opposed to the error simply exceeding some threshold), the time step is again cut in half, and in addition we restore the system to its state some predetermined number of time steps in the past, to enable it to avoid evolving into the serious error condition.
In Fig. 4.24 we can see the reason why we have to go back a couple of time steps in order to correct a trajectory crossing: usually, by the time it becomes apparent that the positions of the particles are erroneous, the background fields (such as Q and S) have been compromised for quite a while. Indeed, some interpolation methods (i.e. splines) are prone to develop nasty oscillatory perturbations by the edges that will self-feed after many time steps and give rise to trajectory crossings.
[Figure 4.22: Dynamic timescale; abrupt declines represent time step adjustments caused by error conditions. Top: space-time graph. Bottom: time step versus time.]
[Figure 4.23: Using the recorded time steps from Fig. 4.22, the space-time graph can be converted back to a linear time scale, where the harmonic oscillator trajectories are more easily recognized.]
[Figure 4.24: Perturbation in secondary fields not yet apparent in the log probability. Clockwise from top left: log probability C, Q, ∇²S and ∇S.]
Because of this lag in cause and effect, we must use different monitoring functions for
different methods.
Areas that are troublesome when evaluating derivatives with splines are the compression areas, where the trajectories are coming closer together and thus the density is increasing, leading to higher numerical error in the differentiation. These areas can be identified by ∇²S < 0, or equivalently ∇·v < 0; such an area is readily visible in the bottom right panel of Fig. 4.24.
A useful monitoring function for problematic zones, one that does a good job of catching unstable conditions early on, is the time evolution of the quantum potential, dQ/dt, which we can, for error monitoring purposes, approximate by its more problematic first term, ∇⁴S, represented in Fig. 4.25.
[Figure 4.25: High frequency oscillations in ∇⁴S for the system of Fig. 4.24]
By comparing all the fields represented in that figure and the previous one, we can see an illustration of the lag in the propagation of instabilities: they are quite evident in ∇⁴S as high frequency spatial oscillations, and are already somewhat developed in Q of Fig. 4.24, but they are not yet recognizable in C at that particular time step.
Quality of Fit
The dynamical time step itself provides a measure of the quality of the simulation at each point in time: it tends to be very small in regions where the system is encountering errors, and when the time evolution is relatively error free it will increase, as in Fig. 4.22.
Furthermore, we can make use of the fact that the Ehrenfest theorem should hold, as this is a quantum system: we can take averages of observables such as position and momentum and compare their values to those expected in a purely classical system, according to eq. 2.8. At each time step we compute the average position ⟨x⟩ of the trajectories we are following, and compare it to that of the classical trajectory arising from the mean initial values.
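A sketch of this check (each trajectory carries weight 1/N, so ⟨x⟩ is a plain average; the classical reference is advanced alongside the simulation with a simple Euler update, and force is an assumed callable returning the classical force):

    import numpy as np

    def ehrenfest_check(x_traj, x_cl, v_cl, force, dt, m=1.0):
        """Return the deviation <x> - x_classical and the advanced classical state."""
        deviation = np.mean(x_traj) - x_cl   # Ehrenfest: <x> should follow eq. 2.8
        v_cl = v_cl + force(x_cl) / m * dt   # classical reference trajectory
        x_cl = x_cl + v_cl * dt
        return deviation, x_cl, v_cl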
Chapter 5
Validation, Test Cases
To check the validity of the trajectories and to test the performance of the various
interpolation methods we start by choosing a couple of (the very few) systems for
which an analytical solution is known. We then apply the same tests to a propagating
problem with interference similar to that of Sec. 4.3, that is known to have long time
stability issues in Bohmian calculations.
We found that using the QHJE equations of motion of Sec. 3.4.3, as opposed to the Newtonian code of Sec. 3.4.2, results in both faster execution and more stable numerics. The reason for that is probably linked with the higher order of numerical differentiations that the usage of P and the algorithm of Sec. 3.4.2 carry, and the fact that ∇Q is not calculated directly; instead Q is combined with the kinetic energy and the classical potential in dS/dt.
Therefore, in the following calculations, intended to compare interpolation routines, we will be using the QHJE code, modified in the following way:
So that comparisons are easier to make, we use a fixed time step ($\delta t = 5\times 10^{-3}$) for all trials. This fixed time step, which we get by turning off the dynamic time step machinery, will cause trajectory crossings to be more frequent than would normally happen, but the trajectory crossings will still depend on the quality of the interpolation methods used.
When the run is successful we compare the final wavepacket with the one obtained using the split operator method (see Sec. 3.3). If, on the other hand, error conditions do occur, a note is taken of the time step when that happened, giving us a rough measure of which interpolation is more susceptible to numerical instabilities.
5.1.1 Table Description
A brief explanation of the parameters seen in the following tables 5.1, 5.2 and 5.3 follows.
The column entitled CPU(s) is self explanatory: it is simply the time in seconds that it takes to run the code. Note that, unlike the Moyer code, no optimization was attempted on the Bohmian code, and visualization and debugging code is run inside the main calculating loop. For our purposes of comparing interpolation methods this is not an issue, though, since they all run the same unoptimized code.
72
Tstop
reached our predetermined maximum number of time steps or because an error con
clition such as trajectory crossing is encountered, that information is included in the
next column "Traj".
In the next two columns estimates of the error of the final wavefunction are given
as compared to the analytic solution of the same problem or to the Moyer solution
(it was not necessary to use a numerical solution for the first two tables as analytic
solutions exist for those potentials but we always compute the Moyer solution in all
cases).
The error is the difference between our solution and the wavebased solution in the
spatial domain common to both. In< (P-Pm) >we have the error in the density while
in < (C - Cm) >the error in the log density ( x ~) is represented, these are presented
as an average value the standard deviation
CJP
J(P-;t_"i_)
or
CJc
J(C;~rr_;_)
2
Finally the last column gives us an estimate of the relative speed of all the methods
in time steps per second of CPU time.
The Moyer grid method with a parameter 2048 x 2000 means that it was run on
a 2048 point space grid for 2000 time steps .
P is the number of trajectories or particles used, and in the polynomial method D stands for the polynomial degree used. At each point we find the D nearest neighbours and fit a polynomial to them to get estimates of the derivatives (for instance, if D=2 we get the usual central difference formula for the curvature); a minimal sketch of this local fit follows.
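The sketch below runs on toy data; all names here are illustrative and not taken from the listings:

x = linspace(-5,5,50)'; C = -x.^2/4;      % toy samples of the log density
i = 25; D = 2; x0 = x(i);                 % evaluation point and degree
[tmp, idx] = sort(abs(x - x0));           % find the nearest neighbours of x0
nb  = sort(idx(1:D+1));
c   = polyfit(x(nb), C(nb), D);           % local polynomial fit
dC  = polyval(polyder(c), x0);            % first derivative at x0
ddC = polyval(polyder(polyder(c)), x0);   % curvature; for D=2 this reproduces
                                          % the central difference estimate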
Part of the least squares method is an extra parameter N, the number of nearest neighbours considered at each point. In the case N > D+1, the higher the N the more averaging occurs in the least squares estimate.

In the sines method, the LPFilter parameter represents the percentage of low-pass filtering applied to the Fourier spectrum of the residues.
5.1.2 Free Gaussian Packet
This is the simplest case: there is no external potential and only one initial Gaussian packet exists, so no interference effects are expected. The free Gaussian wavepacket evolves in time ([15]) according to:

$$\psi(x,t) = \left(2\pi s_t^2\right)^{-1/4} e^{-x^2/4 a_0 s_t}, \qquad s_t = a_0\left(1 + \frac{i\hbar t}{2 m a_0^2}\right)$$
All interpolation methods deal quite well with this simplest of cases, a single decaying Gaussian. The one exception is when we increase the number of points in the polynomial interpolation to 32; there the simulation doesn't quite make it to 500 timesteps, and there is a trajectory crossing at step 469. This is probably due to the high order of polynomial used causing oscillations, as in the Runge phenomenon pictured in Fig. 3.7.
Table 5.1: Free Gaussian packet, comparison of interpolation methods

Description/Parameters    | CPU(s) | Tstop | Traj   | Error (mean, sigma) | steps/s
Moyer grid 2048x2000      | 49     | 2000  | N/A    | -          | 40.8
Poly P=50 D=2             | 83     | 500   | Stable | -1.54  2.3 | 6.02
Poly P=50 D=5             | 137    | 500   | Stable | -1.54  2.3 | 3.65
Poly P=50 D=6             | 141    | 500   | Stable | -1.54  2.3 | 3.55
Poly P=50 D=8             | 142    | 500   | Stable | -1.54  2.3 | 3.52
Poly P=50 D=16            | 145    | 500   | Stable | -1.54  2.3 | 3.45
Poly P=50 D=32            | 161    | 469   | Cross  | N/A        | 3.11
LeastSq P=50 D=2 N=6      | 85     | 500   | Stable | -1.54  2.3 | 5.88
LeastSq P=50 D=6 N=8      | 139    | 500   | Stable | -1.54  2.3 | 3.60
Sines P=50 LPFilter=10%   | 16.4   | 500   | Stable | -1.54  2.3 | 30.5
Gaussians P=50            | 23.3   | 500   | Stable | -1.54  2.3 | 21.5
Splines P=50              | 16.8   | 500   | Stable | -1.54  2.3 | 30.5
SmoothSplines P=50 B=12   | 21     | 500   | Stable | -1.54  2.3 | 23.8
A glance at the table shows that the local (point-by-point) methods are much slower than the rest. In the polynomial or least squares methods, the routine has to consider each point, determine a local neighbourhood of size N, and then construct an approximation in that neighbourhood. In other methods (such as splines) we first get a global approximation to the log density, and then we get updated values for every trajectory at the same time. In other words, we do operations on the whole vector of positions instead of on each point separately, with corresponding performance gains. This is particularly true as the number of trajectories increases, as the sketch below illustrates.
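The contrast can be seen with the same Spline Toolbox calls used in the listings: one global fit replaces an entire per-point polyfit loop (toy data; names are illustrative):

x  = sort(10*rand(50,1)-5); C = -x.^2/4;  % toy trajectory positions / log density
sp = spapi(4,x,C);                        % one global cubic spline fit
DC  = fnval(fnder(sp,1),x);               % dC/dx at every trajectory at once
DDC = fnval(fnder(sp,2),x);               % d2C/dx2, same single vectorized call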
5.1.3 Harmonic Oscillator
Here the external potential is $V(x) = \frac{1}{2}\omega^2 x^2$. Fig. 5.2 looks somewhat different from Fig. 4.23 because we are using a uniform distribution of initial points, as opposed to points distributed according to the probability density as in Fig. 4.23.
Table 5.2: Harmonic oscillator, comparison of interpolation methods

Description/Parameters    | CPU(s) | Tstop | Traj   | Error (mean, sigma) | steps/s
Moyer grid 2048x2000      | 49     | 2000  | N/A    | 0            | 40.8
Poly P=50 D=2             | 173    | 1000  | Stable | -9.73   539  | 5.78
Poly P=50 D=3             | 170    | 1000  | Stable | -9.73   539  | 5.88
Poly P=50 D=6             | 295    | 1000  | Stable | -9.73   539  | 3.39
Poly P=50 D=10            | 295    | 1000  | Stable | -9.73   539  | 3.39
LeastSq P=50 D=2 N=6      | 164    | 1000  | Stable | -9.73   539  | 6.10
LeastSq P=50 D=6 N=8      | 277    | 1000  | Stable | 290    5300  | 3.61
Sines P=50 LPFilter=10%   | 31     | 1000  | Stable | 5400  15200  | 32.2
Gaussians P=50            | 45     | 1000  | Stable | -13.7    41  | 22.2
Splines P=50              | 33     | 1000  | Stable | -1990  4610  | 30.5
SmoothSplines P=50 B=12   | 41     | 1000  | Stable | -9.73   539  | 24.5
SmoothSplines P=50 B=12   | 361    | 10000 | Stable | -182    820  | 27.7
5.1.4 Interfering Gaussians
Here we apply the same tests to a system that will develop interference patterns similar to those discussed in Sec. 4.3. We place two Gaussians side by side and let them interfere, with no external potential applied. The evolution of the system is represented in Fig. 5.3, which was taken from a run of the smoothed splines method (last item in table 5.3).
Table 5.3: Interfering Gaussians, comparison of interpolation methods

Description/Parameters    | CPU(s) | Tstop | Traj   | steps/s
Moyer grid 2048x2000      | 49     | 2000  | N/A    | 40.8
Poly P=50 D=2             | 132    | 862   | Cross  | 6.5
Poly P=50 D=3             | 135    | 840   | Cross  | 6.2
Poly P=50 D=4             | 212    | 814   | Cross  | 3.8
Poly P=50 D=5             | 211    | 737   | Cross  | 3.5
Poly P=50 D=6             | N/A    | 736   | Cross  | N/A
Poly P=50 D=10            | N/A    | 652   | Cross  | N/A
LeastSq P=50 D=2 N=4      | 70     | 416   | Cross  | 5.9
LeastSq P=50 D=2 N=5      | 108    | 640   | Cross  | 5.9
LeastSq P=50 D=2 N=6      | 110    | 686   | Cross  | 6.2
Sines P=50 LPFilter=10%   | 17.5   | 472   | Cross  | 27
Gaussians P=50            | N/A    | 332   | Cross  | N/A
Splines P=50              | N/A    | 629   | Cross  | N/A
SmoothedSplines B=12      | 75     | 2000  | Stable | 26.7
SmoothedSplines B=12      | 392    | 10000 | Stable | 25.5
We can see from table 5.3 that most methods lead to trajectory crossings. We remind the reader that we have turned off the dynamic time scale control mechanism. The objective here is not to avoid trajectory crossings but to evaluate different interpolation methods; so by looking at table 5.3, and specifically at which time step trajectory crossings were observed, we can say that a certain method is more stable than another. Of course, the proper stable choice here is the smoothed splines method, as it is the only one that doesn't result in crossed trajectories.

Figure 5.3: Trajectories x(t) for the smoothed spline calculation, 10,000 timesteps

Interestingly, the simple polynomial method with D=2, which fits parabolas at each point (resulting in the familiar central differences formula), doesn't do too badly compared to other methods, and outperforms the least squares method and even the basic spline method (crossing at step 862 vs 629).
5.2 Choosing Δt
As we saw in Sec. 4.3.2 and Figs. 4.24, 4.25, by the time we notice that there is something wrong with the C or the S fields, the root cause of the problem usually occurred a few time steps earlier, and a typical signature is high frequency oscillations in $\nabla^4 S$.

Oscillations suggest that we are integrating some trajectories too far in the direction of other particles, resulting in an excessive repulsive force and oscillations down the line. Why this tends to happen in problem areas of compression is linked to the fact that the absolute error of a derivative goes approximately as $2\varepsilon/\delta x$, where $\varepsilon$ is the absolute error of the underlying field; so as particles are packed closer together and $\delta x$ becomes smaller, the error in the gradients estimated in that area grows. In fact for $\nabla^2$ the error will be of order $4\varepsilon/(\delta x)^2$; it is therefore imperative, when using non-averaging (such as least squares) interpolations, that we reduce the time intervals in lockstep with the minimum inter-particle distance, to minimize integration errors.
Let's take a look at the QHJE equations of motion:

$$\dot X = \nabla S$$
$$\dot S = -\frac{1}{2}(\nabla S)^2 + \frac{1}{2}\left(\nabla^2 C + (\nabla C)^2\right) - V$$
$$\dot C = -\frac{1}{2}\nabla^2 S$$
where X is the position, S the phase and C the log-density. For a moment let's ignore any errors in the X field and concentrate on the fields S and C with absolute errors $\varepsilon_S$ and $\varepsilon_C$: for $\nabla S$ we get an error of order $2\varepsilon_S/\delta x$, for $\nabla^2 S$ the error is $4\varepsilon_S/(\delta x)^2$, and similarly for the C field. If we now consider $S_0$ and $C_0$ to be otherwise noiseless fields, using the expressions for the errors in the gradients we have the error estimates after one time step:

$$\varepsilon_S^1 \simeq \varepsilon_S^0 + dt \times \frac{2}{(\delta x)^2}\,\varepsilon_C^0, \qquad \varepsilon_C^1 \simeq \varepsilon_C^0 + dt \times \frac{2}{(\delta x)^2}\,\varepsilon_S^0$$
For small $\delta x$, the $(\delta x)^2$ term will dominate in the first equation and we can further simplify:

$$\varepsilon_S^1 \simeq \frac{2\,dt}{(\delta x)^2}\,\varepsilon_C^0, \qquad \varepsilon_C^1 \simeq \frac{2\,dt}{(\delta x)^2}\,\varepsilon_S^0$$
Or, with $a_n = \frac{2\,dt_{n-1}}{(\delta x_{n-1})^2}$:

$$\varepsilon_S^n \simeq a_n\,\varepsilon_C^{n-1}, \qquad \varepsilon_C^n \simeq a_n\,\varepsilon_S^{n-1}$$

So in general the error after n steps grows as the product of the $a_k$. Since $a_n = \frac{2\,dt_n}{(\delta x_n)^2}$, we want $a_n$ to be as small as possible, which implies an alternative time step criterion of the form $dt_n \propto (\delta x_{\min})^2$, where $\delta x_{\min}$ is the minimum inter-particle distance. This will mean that sometimes the whole simulation will have to slow down, by virtue of a small dt, in order to integrate accurately zones of compression. If we are using a local interpolation method, this could probably be relaxed to integrating only the neighbourhood of small $\delta x$ with a reduced timescale, in a sort of adiabatic approximation where the rest of the field would be "frozen", but we have not yet explored that possibility.
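As a sketch, the criterion can be implemented as below; the safety factor alpha is a hypothetical choice for illustration, not a value used in the thesis runs:

function dt = choose_dt(xs, alpha)
% pick dt from the minimum inter-particle spacing so that the error
% amplification factor a_n = 2*dt/dxmin^2 stays equal to alpha
dxmin = min(diff(sort(xs)));   % minimum inter-particle distance
dt    = alpha*dxmin^2/2;       % a_n = 2*dt/dxmin^2 = alpha
end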
5.3
Of the methods tested, only the smoothed splines dealt successfully with the interference of Fig. 5.3. Also, because the smoothed splines are calculated as a global approximation to all points, the method is relatively fast compared to other interpolation methods that require a local calculation in the neighbourhood of each point.
splitting, etc.
For completeness' sake we should mention that, besides variations on resampling and/or interpolation methods, there are some very different numerical takes on the Bohmian interpretation that we did not explore. One such case is the derivative propagation method (see [32]). Another is based on the Wigner function formulation (see [57]). The arbitrary Lagrangian-Eulerian method of Wyatt and coworkers ([8]) is a way to deal with insufficient data points in zones of interest for the fields, conveniently letting us specify the grid that we wish to work with at each time step. Finally, as ways to deal with the problematic nodal zones, we mention the covering function method of Babyuk and Wyatt ([33]), where the wavefunction is split into two nodeless components (to be independently evolved), and the work of Poirier and Trahan ([34, 35, 36]), where the wavefunction is represented by two counter-propagating traveling waves.
Chapter 6
2D and Higher Dimensions
Generalization of Bohmian methods to two and higher dimensions should be relatively straightforward. Unfortunately the same cannot be said of the wave-based methods: boundary conditions are not so straightforward, and even the integration techniques can look quite different in 1D, 2D and higher dimensions. In fact we do not know of a transparent boundary condition scheme in 2D equivalent to the one we used in Sec. 3.3.2 for 1D. Instead we revert to the imperfect technique of complex absorbing potentials as an approximation to TBCs.
6.1
6.1.1 ADI Method
To get the time evolution of the wavefunction we will use the ADI method (Alternating-Direction Implicit, see [56]). This method is second order accurate both in space and time, and it is an efficient way of solving a parabolic diffusion equation such as the Schrodinger equation.
For a general parabolic partial differential equation $\partial_t\psi = D\nabla^2\psi$ (assuming D to be constant), the two half-steps read:

$$\psi^{t+\frac12}_{x,y} = \psi^{t}_{x,y} + \frac{D\,\delta t}{2}\left(\delta^2_x\,\psi^{t+\frac12}_{x,y} + \delta^2_y\,\psi^{t}_{x,y}\right) \tag{6.1}$$

$$\psi^{t+1}_{x,y} = \psi^{t+\frac12}_{x,y} + \frac{D\,\delta t}{2}\left(\delta^2_x\,\psi^{t+\frac12}_{x,y} + \delta^2_y\,\psi^{t+1}_{x,y}\right)$$

where $\delta^2_x$ and $\delta^2_y$ denote the discrete second differences in each direction.
The name ADI comes from the fact that we split the time evolution into two half steps. In the first we mix a forward and a backwards Euler scheme for the x and y directions, and in the second half step those roles are reversed, thus earning the name alternating direction. A minimal sketch follows.
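The sketch below does one Peaceman-Rachford ADI step for the diffusion prototype with hard-wall boundaries; the Schrodinger case needs a complex D and a potential term, which are omitted here, and all parameter values are illustrative:

N = 100; D = 1; h = 1; dt = 0.1; r = D*dt/(2*h^2);
e = ones(N,1);
L = spdiags([e -2*e e],-1:1,N,N);     % 1D second-difference operator
A = speye(N) - r*L;                   % implicit half-step matrix
[ix,iy] = ndgrid(1:N);
psi = exp(-((ix-N/2).^2+(iy-N/2).^2)/50);   % initial Gaussian
psi = A\(psi + r*psi*L);              % half step: implicit in x, explicit in y
psi = (A\(psi.' + r*psi.'*L)).';      % half step: implicit in y, explicit in x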
6.1.2 Boundary Conditions
We propagate the same initial Gaussian wavepacket (shown in Fig. 6.1) with different boundary conditions: hard wall (Fig. 6.2), periodic (Fig. 6.3) and complex absorbing potential (Figs. 6.4, 6.5). The wavepacket is given some initial momentum in the y direction so that, after a few time steps, interaction with the boundary can be more easily seen.
Obviously the ideal result would be the analytical one, where the wavepacket retains its Gaussian shape for all time and merely becomes more diffuse (larger σ) with time. While that does not happen for any of the pseudo transparent boundary condition methods tried, some do better than others in minimizing reflections and retaining the Gaussian shape, as we shall see.
6.2 Phase Unwrapping
Recall that to calculate the trajectories in Sec. 3.3.3 we had to unwrap the 1D phase of the wavefunction. In 2D we have the same problem, except that now we have to monitor for jumps of 2π in any direction; a sketch of the straightforward generalization is shown below.
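The generalization of the 1D unwrap calls of the listings is two successive passes (a sketch on a toy wavefunction, assuming the phase varies smoothly between grid points):

[x,y]  = ndgrid(linspace(-5,5,64));
PSI2D  = exp(i*(2*x+3*y));        % toy 2D wavefunction
S = angle(PSI2D);                 % wrapped phase in (-pi,pi]
S = unwrap(S,[],1);               % remove 2*pi jumps along the first direction
S = unwrap(S,[],2);               % then along the second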
In Fig. 6.6 we display a simple case where the phase is increasing mainly in one direction, and in Fig. 6.7 a more realistic case is shown, as well as its resolution.

Figure 6.7: A practical case of the phase problem and its solution
6.3 Bohmian Methods in 2D

In 2D the Bohmian equations of motion remain the same. The only adjustments that we have to make are to account for the fact that we are now dealing with position and momentum vectors with N = 2 coordinates instead of just scalars. Most interpolation techniques translate to 2D and higher dimensions relatively straightforwardly.
Chapter 7
Cellular Automata and Lattice
Boltzmann Methods
In this chapter we explore a possible connection between simulation of Bohmian
mechanics and the world of cellular automata, of which lattice Boltzmann methods
are a subset.
7.1 Cellular Automata
Since they were invented by von Neumann in the 1940s, cellular automata have been successfully used to simulate an enormous range of systems, including bacterial growth, forest wildfires, rush hour traffic in a busy city and classical fluid dynamics [39, 40]. These cellular automata operate according to a tantalizingly simple set of rules (that is supposed to capture the microphysics of the process), from which the macroscopically complex behaviour of the system gradually emerges. It is conceivable that by using a proper set of rules we may be able to reproduce the behaviour of a quantum fluid in the Bohmian sense, thus approaching the problem from a completely different viewpoint.
Recently re-popularized by Stephen Wolfram [40], cellular automata (CA) have been with us for decades. As we mentioned, they are particularly well suited to model complex and chaotic systems as well as more "classical" problems. This versatility should not surprise us, since they have been demonstrated [42] to be formally equivalent to a Turing machine, so theoretically they are capable of carrying out any well posed calculation. To emphasize this point we should note that the calculations of previous chapters were done on Turing machines (standard computers).

The question remains: are there advantages to using cellular automata in the context of time evolution of a quantum system, as opposed to more conventional techniques, and in particular can we marry the framework of Bohmian mechanics to the gear of cellular automata? That is what we explore in this section.
7.1.1
Many interesting cellular automata implementations have been studied ([40]), and the reader is probably already familiar with at least one of them. In Conway's "Game of Life", a simple set of rules gives rise, totally deterministically, to quite complex behaviour of its constituents, mimicking organisms that live, die and even self-replicate on the computer screen. Incidentally, a Turing machine has been constructed with Conway's Game of Life "parts"; it is described in [43].
7.2
7.2.1
Historically, these models were the original catalysts for the usage of cellular automata in fluid simulation. Originally introduced in the 1970s by Hardy, Pomeau and de Pazzis, these simulations consist of an underlying lattice on which two basic steps are performed to advance in time:

A free streaming step, where particles move with whatever momentum has been assigned to them.

A collision step, where particles collide with each other in momentum and particle number conserving exchanges.

Finally, if at some point in time we wish to recover the physically relevant macroscopic fields, an averaging over many lattice sites is done.

Amazingly, together with a judicious choice of underlying lattice, this is all it takes to make a respectable simulation of a simple fluid. Following are some historically famous choices for the lattice and the corresponding collision rules.
"HPP" LCGA
This is the original model by Hardy, Pomeaux and de Pazzis (HPP). Here, the
chosen lattice is square, and the simple collision rules are illustrated in Fig. 7.1. This
is mostly a toy model and the simple choice of lattice ends up having consequences in
terms of the resultant flow not being isotropic. In fact this lattice gives rise to fluid
vortices which are square instead of circular.
Figure 7.1: Lattice and collision rules for the HPP model
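A toy HPP update, as a sketch: four Boolean occupation fields hold the four lattice directions, head-on pairs with no other particles at the site scatter into the perpendicular pair, and then every field streams one site (periodic wrap-around; all names are illustrative):

N = 64; n = {rand(N)>0.8, rand(N)>0.8, rand(N)>0.8, rand(N)>0.8}; % E,W,N,S
% collision: E+W only -> N+S, and N+S only -> E+W (momentum conserving)
ew = n{1} & n{2} & ~n{3} & ~n{4};
ns = n{3} & n{4} & ~n{1} & ~n{2};
n{1} = (n{1} & ~ew) | ns;  n{2} = (n{2} & ~ew) | ns;
n{3} = (n{3} & ~ns) | ew;  n{4} = (n{4} & ~ns) | ew;
% streaming: each field moves one site in its own direction
n{1} = circshift(n{1},[0  1]);  n{2} = circshift(n{2},[0 -1]);
n{3} = circshift(n{3},[-1 0]);  n{4} = circshift(n{4},[ 1 0]);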
"FHP" LGCA
Later introduced by Frish, Hasslacher and Poumeaux, this CA does not suffer from
the anisotropic anomaly of the HPP model. This is thanks to using an hexagonal
lattice and a correspondingly larger set of collision rules (see Fig. 7.2).
Figure 7.2: Lattice and momentum-conserving collision rules for the FHP model
Other Lattices

As one moves to higher dimensions, the complexity of the lattices necessary to preserve isotropy quickly escalates, and the number of collision rules that we need to track rises exponentially.
7.3 Lattice Boltzmann Methods
One of the disadvantages of the previously mentioned LGCA methods is their high statistical noise, a consequence of their discrete nature, whereby a lattice direction either is or is not occupied. A natural evolution from these ideas is the Lattice Boltzmann method, where one replaces the Boolean particle number by its average value, resulting in a continuous density distribution function $f(\vec r, \vec v, t)$ defined in phase space. The evolution of f is governed by the Boltzmann equation

$$\frac{\partial f}{\partial t} + \vec v\cdot\nabla_{\vec r}\, f + \vec F\cdot\nabla_{\vec v}\, f = \Omega(f) \tag{7.1}$$

Here $\vec v$ is the microscopic velocity and $\vec F$ an external force. Taking the collision operator to be linear in the deviations from equilibrium, $\Omega_i = A_{ij}(f_j - f_j^0)$, constitutes the quasi-linear LBE method. Going one step further we get the fully linear approximation, the BGK (Bhatnagar, Gross, Krook [58]) collision operator: $\Omega = \frac{1}{\tau}(f^0 - f)$.
7.4
Now that we have established a link between cellular automata, lattice Boltzmann and fluid mechanics, we use Succi ([50]) as a guide to illustrate the link in one dimension between the Dirac equation and lattice Boltzmann methods.
7.4.1 Dirac Equation

The Dirac equation is written with matrices $\alpha_{x,y,z}$ built from the Pauli matrices:

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
7.4.2 Majorana Representation

In the Majorana representation the Dirac equation is represented by

$$W^\mu_{lk}\,\partial_\mu\,\psi_k = 0, \qquad \mu = 0, 1, 2, 3 \tag{7.2}$$

with $W^0_{lk} = \delta_{lk}$, $W^1 = \alpha_x$, $W^2 = -i m\,\alpha_y\beta$, $W^3 = -\alpha_z + qV\delta_{lk}$ (all real 4 by 4 matrices), comprising a mass term and an external potential term. This form is reminiscent of a 4-phase fluid (the two Dirac spinors, each with their 2 components) interacting via a scattering matrix $M_{jk}$, with top and bottom Dirac spinors having components $u_1 = u_\uparrow$, $u_2 = u_\downarrow$, $d_1 = d_\uparrow$, $d_2 = d_\downarrow$.
7.4.3

Identifying distribution functions $f_i$ with the spinor components u, d, moving with discrete velocities $v_i = \mp 1$, the quantum system is formally equivalent to a 1-D lattice Boltzmann system with four channels (up and down spinors) that will intermix, because each $f_i$ depends on the others through the collision (mass) term.

7.5 Numerical Experiments

7.5.1 Algorithm
We work in units where $\hbar = c = 1$. The discretized evolution (7.4) is an implicit scheme, as can be seen by the presence of advanced terms on both sides of the equation. To make it explicit and refer only to known values, we solve the linear system for the advanced components, obtaining the coefficients

$$a = \frac{1 - \Omega/4}{1 + \Omega/4 - i g}, \qquad b = \frac{m}{1 + \Omega/4 - i g}$$
This convenient linear system will be the basis of our lattice propagation and collision calculations. Note that, if desired, we can easily include non-linear and/or self-consistent terms by suitably changing the potential term g (i.e. if we want it to be a function of the density we can make it a function of $(u^2 + d^2)$). In that case the coefficients a and b will no longer be constant and will have to be recomputed at each time step.
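A sketch of the resulting explicit update follows; the form of Omega and the rotation-like collision below are our assumptions for illustration, not taken verbatim from the text:

N = 2048; z = (1:N)'; dt = 1; m = 0.1; g = 0;
u = exp(-(z-N/2).^2/200); d = zeros(N,1);   % up/down channel amplitudes
Om = (m*dt)^2;                              % assumed form of Omega
a  = (1-Om/4)/(1+Om/4-i*g);
b  = (m*dt) /(1+Om/4-i*g);
for t = 1:100
  u = circshift(u, 1); d = circshift(d,-1); % streaming with v = +/-1
  [u,d] = deal(a*u+b*d, a*d-b*u);           % collision mixes the channels
end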
7.5.2
To test the algorithm we are going to use similar systems to those of Sec. 5.1,
whose analytical solutions are well known.
Free Packet
We start with a Gaussian packet centered in the middle of our array and given a
certain momentum p towards the right. The fact that the Gaussian disperses in time
is clearly visible in Fig. 7.4. How that dispersion relates to the expected analytical value is represented in Fig. 7.5. Also represented in the same figure is the average value of the position of the packet, which follows a path consistent with Ehrenfest's theorem. The expressions for both expectation values are:

$$\langle x\rangle = x_0 + \frac{p_0}{m}\,t \qquad\text{and}\qquad \sigma(t) = a_0\sqrt{1 + \left(\frac{\hbar t}{2 m a_0^2}\right)^2}$$
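These can be verified directly on a discretized packet (a sketch with illustrative parameter values and $x_0 = 0$):

h = 0.05; x = (-40:h:40)'; a0 = 2; p0 = 1; m = 1; hbar = 1; t = 5;
st  = a0*(1+i*hbar*t/(2*m*a0^2));
psi = exp(-(x-p0/m*t).^2/(4*a0*st)).*exp(i*p0*x);
P   = abs(psi).^2; P = P/(sum(P)*h);        % normalized density
xm  = sum(x.*P)*h                           % should match x0 + (p0/m)*t = 5
sg  = sqrt(sum((x-xm).^2.*P)*h)             % should match sigma(t) above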
Looking carefully at the graphs of Fig. 7.5, we note that as the wavepacket approaches the boundary our values deviate further from the theoretical ones, as we start to observe some numerical reflection from the walls. This is particularly visible in the right panel of Fig. 7.4.
Figure 7.5: Mean position (left) and standard deviation (right) of the wavepacket vs time
Figure 7.6: Wavepacket in a quadratic potential (left) and its average position (right)
Harmonic Oscillator

In the next case we begin with a Gaussian packet with zero initial momentum, placed in a harmonic oscillator potential. When plotting the mean position of the packet in Fig. 7.6, we see that some numerical dissipation is present, as the amplitude is slightly smaller after one period.
Barrier Penetration

Finally, in the last case, instead of having zero potential everywhere we place a 20 unit long barrier at position 1300. In Fig. 7.8 intense interference can be seen at the edge of the "wall", and a weak evanescent wave continues along the interrupted path of the Gaussian packet, across the barrier, to the left side of Fig. 7.7. Here we don't face the same problems with nodes or interference as in the Bohmian methods considered before; the reason is that this is essentially a wave-based method, closer in its internals to our reference Moyer method of Sec. 3.3.
Figure 7.8: Interference and exponential decay inside the potential barrier ((a) t = 800, (b) t = 1100)
7.6
As we have seen, the Majorana lattice Boltzmann method works; however, it really belongs to the class of wavefunction-based methods, such as the split operator method we used previously in Sec. 3.3. We still have hard domain walls and a fixed space-time grid, with all the associated limitations. What this method does do, however, is give us a glimpse of a connection between cellular automata and quantum simulation, and in doing so it hints that an equivalent Bohmian-based cellular automaton may also be devisable.
7.7

Writing the Bohmian equations in fluid (Madelung) form gives

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec v) = 0 \quad\Leftrightarrow\quad \frac{d\rho}{dt} + \rho\,\nabla\cdot\vec v = 0$$

$$\frac{\partial\vec v}{\partial t} + (\vec v\cdot\nabla)\vec v = -\frac{1}{m}\nabla(V + Q) \tag{7.5}$$

And then looking at the equation for a classical fluid (the Euler equation):

$$\frac{\partial v_i}{\partial t} + (\vec v\cdot\nabla)v_i = -\frac{1}{m}\,\partial_i V - \frac{1}{m\rho}\,\partial_j P_{ij} \tag{7.6}$$
We see that the first equation 7.5 expresses the conservation of probability, and the second maps term by term onto the classical fluid equation 7.6, except for the pressure term, which must map to Q for the analogy to hold. In fact ([15]), if we substitute the pressure tensor $P_{ij}$ by

$$\sigma_{ij} = -\frac{\hbar^2}{4m}\,\rho\,\partial_i\,\partial_j \ln\rho$$

the two sets of equations coincide.
7.7.1

At each lattice site, the N discrete directions combine in the following way to reconstitute the real macroscopic fields:

$$\sum_i m_i\, f_i = \rho, \qquad \sum_i m_i\, f_i\,\vec e_i = \rho\,\vec u \tag{7.7}$$

where the $m_i$ are direction-dependent weights; for instance, in the D2Q9 lattice, a two dimensional lattice with 9 velocity vectors, the diagonal terms have to be weighted differently since they have different lengths.

Now $f^0$ can be quite complicated, but we are going to consider a Taylor expansion in the macroscopic fields $\vec u$ and ρ, thus establishing a connection between the macroscopic equilibrium and the collisional microphysics:

$$f^0_i = a\rho\,(\vec e_i\cdot\vec u) + b\rho + c\rho\; e_{i\alpha}e_{i\beta}\,u_\alpha u_\beta + \cdots$$

Here a, b and c are coefficients to be determined; ρ and $\vec u$ are the macroscopic fields, and $\vec e_i$ are the lattice vectors.
We can now insert this Taylor expanded form of the Boltzmann equilibrium function into the equations of eq. 7.7. The first equation implies that in a "collision" or "step" $\sum_i m_i f^0_i = \rho$, i.e. the density is unchanged. A quick numerical check of such moment constraints, for the standard D2Q9 weights, is sketched below.
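The standard D2Q9 weights (our example; the thesis leaves the coefficients to be determined) satisfy the zeroth and first moment constraints:

w = [4/9, 1/9,1/9,1/9,1/9, 1/36,1/36,1/36,1/36];          % rest, axes, diagonals
e = [0 0; 1 0; 0 1; -1 0; 0 -1; 1 1; -1 1; -1 -1; 1 -1];  % lattice vectors
sum(w)    % = 1     -> the density is reproduced
w*e       % = [0 0] -> no spurious momentum for the rest state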
7.7.2
In general, the Lattice Boltzmann method works for problems that can be set up in the form of equations of continuity, which are the result of one or more conservation laws ([53]). For instance, for some hypothetical macroscopic fields Y and Z, the conservation relations for the zeroth and first moments,

$$\sum_a f^0_a = Y, \qquad \sum_a f^0_a\,\vec e_a = \vec Z, \tag{7.8}$$

together with the space-time evolution given by the Boltzmann equation $f_a(\vec r + \Delta t\,\vec e_a,\, t+\Delta t) = \frac{1}{\tau} f^0_a + \left(1-\frac{1}{\tau}\right) f_a$, result in a macroscopic equation of continuity:

$$\frac{\partial Y}{\partial t} + \nabla\cdot\vec Z = 0 \tag{7.9}$$

These may now be used to determine the coefficients in our particular form for the function $f^0_a$, as shown in Sec. 7.7.1.
Bohmian Case
We need to work with the Bohmian equations of motion, in an Eulerian frame
since the LBM lattice is fixed in space:
fff+ \lpu=O
~~
+ u\lu = - ~ \l(V + Q)
Q=
(7.10)
_.!f_ \lyP
2m yP
We already saw in the previous section how we can generate t he first of these
equations, the probability conservation relation by setting X
1, Y
= p
and Z
= pu.
For the momentum we need the resulting continuity equation to coincide with the Bohmian momentum equation

$$\frac{\partial\vec u}{\partial t} + (\vec u\cdot\nabla)\vec u = -\frac{1}{M}\nabla(V + Q) \tag{7.11}$$

This is a little bit tricky, as we shall see. Looking at equation 7.11, we have set $Y = \rho u$, so we want a term in

$$\frac{\partial(\rho u)}{\partial t} = u\,\frac{\partial\rho}{\partial t} + \rho\,\frac{\partial u}{\partial t}$$
Using the continuity equation $\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho u) = 0$ together with the momentum equation, we get

$$\frac{\partial(\rho u)}{\partial t} + \nabla\cdot(u\,\rho u) = -\frac{\rho}{M}\,\nabla(V + Q), \qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt\rho}{\sqrt\rho}$$
Thus our Z should be equal to $u\rho u + \frac{\rho}{M}(V+Q)$, giving the complete set of conservation relations for our model:

$$\sum_a f^0_a = \rho \tag{7.12}$$

$$\sum_a f^0_a\,\vec e_a = \rho\,\vec u \tag{7.13}$$

The second-moment relation containing Z will set some constraints on the coefficients in eq. 7.13, via the same procedure of Sec. 7.7.1. The resultant behaviour of this lattice Boltzmann system should be a close analog of the Bohmian equations:
$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec u) = 0 + O(\epsilon^2)$$

$$\frac{\partial\vec u}{\partial t} + (\vec u\cdot\nabla)\vec u = -\frac{1}{M}\nabla(V+Q) + \epsilon R_2 + O(\epsilon^2), \qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt\rho}{\sqrt\rho}$$
with an extra term ([59]) $R_2$, a viscous term that can be tuned via the relaxation parameter τ through the second moment $\pi^0_{ij} = \sum_a f^0_a\, e_{ai}\, e_{aj}$ of the equilibrium distribution. Since $Z = u\rho u + \frac{\rho}{M}(V+Q)$ in eqs. 7.12, we are going to end up with Q intact in the Boltzmann distribution function $f^0$. That is to say that, unlike in the standard lattice Boltzmann method, where we have constant coefficients in $f^0$, here the quantum potential enters the equilibrium distribution, and the coefficients have to be recomputed at every lattice site and time step.
7.8 Comments
First, it should be noted that the classic lattice Boltzmann method already has a significant overhead of its own (since it deals with N velocity vectors at each site); this hindrance is usually compensated by the ability to tune the microphysics to deal with specialized problems, complex boundaries, phase mixtures, etc.
Secondly, our Bohmian lattice Boltzmann method is set in an Eulerian frame (see eq. 7.10), so it does not benefit from the speed increase yielded by the sparse grids of the Lagrangian implementations of previous chapters.
Finally, to my knowledge, the only other way to include the quantum term in lattice Boltzmann theory is in the form of an externally applied force. This means that the quantum force term still has to be calculated at each lattice point and does not "emerge" as some sort of collective behaviour (as, say, viscosity does in classic LBM). Given the non-local form of Q, this is a very expensive computational operation to perform at every time step.
Chapter 8
Conclusion
"I think I can safely say that nobody understands quantum mechanics"
Richard Feynman [55]
Bibliography
[1] Daniel Styer et al., "Nine Formulations of Quantum Mechanics", Am. J. Phys. 70 (2002) 288-97.

[2] L. de Broglie, "Recherches sur la theorie des quanta", Doctoral thesis, Universite de Paris (1925).

[3] L. de Broglie, Ann. de Phys., Paris 3 (1925) 22.

[4] E. Madelung, "Quantentheorie in hydrodynamischer Form", Zeit. Phys. 40 (1926) 322-6.

[5] G. Bacciagaluppi and A. Valentini, "Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference", quant-ph/0609184 (2006).

[6] David Bohm, "A suggested interpretation of the quantum theory in terms of "hidden" variables", Phys. Rev. 85 (1952) I: 166-79; II: 180-93.

[7] D. Dürr, S. Goldstein and N. Zanghi, "David Joseph Bohm: 1917-1992", Foundations of Physics Letters 6 (1993) 551-4; also P. Holland and J-P Vigier, Foundations of Physics 23 (1993) 5-6.

[8] R. E. Wyatt, "Quantum Dynamics with Trajectories", Springer Science (2006).

[9] Hua Wu and D.W.L. Sprung, Phys. Lett. A 183 (1993) 413-7.
[10] C.L. Lopreore and R.E. Wyatt, "Quantum wave packet dynamics with trajectories", Phys. Rev. Lett. 82 (1999) 5190-4.

[11] F. Sales Mayor, A. Askar and H.A. Rabitz, "Quantum fluid dynamics in the Lagrangian representation and applications to photodissociation problems", J. Chem. Phys. 111 (1999) 2423-35.

[12] A. Matzkin, "Are Bohmian trajectories real? On the dynamical mismatch between de Broglie-Bohm and classical dynamics in semiclassical systems", quant-ph/0609172 (2006).

[13] W. Pauli, "Remarques sur le probleme des parametres caches dans la mecanique quantique et sur la theorie de l'onde pilote", in A. George (Ed.), "Louis de Broglie: Physicien et Penseur", Paris: Albin Michel.

[14] W. Myrvold, "On Some Early Objections to Bohm's Theory", International Studies in the Philosophy of Science 17 (2003) 7-24.
[19] C. Philippidis, C. Dewdney and B.J. Hiley, "Quantum interference and the quantum potential", Nuovo Cimento 52 B (1979) 15-28.

[20] C. Moyer, "Numerov extension of transparent boundary conditions for the Schrodinger equation in one dimension", Am. J. Phys. 72 (2004) 351-60.

[21] W. van Dijk and F. M. Toyama, "Accurate numerical solutions of the time-dependent Schrodinger equation", physics/0701150v1 or Phys. Rev. E 75 (2007) 036707.

[22] S. Garashchuk and V. Rassolov, "Semiclassical dynamics based on quantum trajectories", Chem. Phys. Lett. 364 (2002) 562-7.

[23] http://physics.nist.gov/cuu/Constants/

[24] E.J. Heller, J. Chem. Phys. 62 (1975) 1544-55.

[25] M. Ehrhardt, "Finite Difference Schemes on Unbounded Domains", World Scientific (2005), pp. 343-84.

[26] A. Arnold, M. Ehrhardt and I. Sofronov, "Discrete transparent boundary conditions for the Schrodinger equation: Fast calculation, approximation and stability", Comm. Math. Sci. (2003).

[27] A. Zisowsky, A. Arnold, M. Ehrhardt and T. Koprucki, "Discrete transparent boundary conditions", Zeitschrift für angewandte Mathematik und Mechanik 58 (2005) 793-805.

[28] S.K. Gray, D.W. Noid and B.G. Sumpter, J. Chem. Phys. 101 (1994) 4062-72.

[29] J. Candy and W. Rozmus, J. Comput. Phys. 92 (1991) 230-56.

[30] H. Yoshida, Celest. Mech. Dynam. Astron. 56 (1993) 27.
[42] K. Zuse, "Rechnender Raum", Friedrich Vieweg & Sohn, Braunschweig (1969).

[43] A. Adamatzky et al., "Collision-Based Computing", Springer (2002), 491.

[44] J. Maddox and E. Bittner, J. Chem. Phys. 115 (2001) 6309-16.

[45] B. Kendrick, J. Chem. Phys. 119 (2003) 5805-17.

[46] T. Usuki, M. Saito, M. Takatsu, R.A. Kiehl and N. Yokoyama, Phys. Rev. B 52 (1995) 8244.

[47] R. Akis, D.K. Ferry and J.P. Bird, "Numerical simulation of quantum dots".
[55] R.P. Feynman, "The Character of Physical Law", MIT Press, Cambridge, Mass. (1965).

[56] W.H. Press, S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, "Numerical Recipes", Cambridge University Press (1999).

[57] A. Donoso and C.C. Martens, "Quantum tunneling using entangled classical trajectories", Phys. Rev. Lett. 87 (2001) 223202.

[58] P. Bhatnagar, E. Gross and M. Krook, "A model for collision processes in gases", Phys. Rev. 94 (1954) 511-25.

[59] G. Yan, Y. Chen and S. Hu, "Simple lattice Boltzmann model for simulating flows with shock wave", Phys. Rev. E 59 (1999) 454-9.
It is a fixed grid method whose core is the Cayley approximation to the system propagator:

$$\Psi(x, t+\Delta) \simeq \frac{1 - iH\Delta/2}{1 + iH\Delta/2}\,\Psi(x, t)$$

This is rewritten with $y(x,t) = \Psi(x, t+\Delta) + \Psi(x, t)$, a function of two consecutive time steps, as

$$y'' - G\,y = \frac{8im}{\Delta}\,\Psi(x,t), \qquad G = 2m\left(V - \frac{2i}{\Delta}\right)$$

on a uniform grid with spacing h. The Numerov scheme is used, resulting in

$$w_{j+1} - 2a_j\,w_j + w_{j-1} = h^2 F_j/d_j \tag{8.1}$$

where $d_j = 1 - \frac{h^2}{12}G_j$ and $w_j = d_j\,y_j - \frac{h^2}{12}F_j$. We see that Eq. 8.1 is a recursion relation that depends on three consecutive points $j-1$, $j$, $j+1$. It is solved by sweeping with recursion relations that involve two new variables e and q, defined by $w_{j+1} = e_j\,w_j + q_j$. To enforce the TBCs we require at the left boundary (j = 0) that $w_{j+1} = \alpha\,w_0 + \beta$, i.e. $e_0 = \alpha = a_0$, with $q_0 = \beta$ involving $\sqrt{a_0^2 - 1}$, and with boundary convolution coefficients

$$l_n = \frac{e^{-in\phi}}{2n-1}\left(P_n(\mu) - P_{n-2}(\mu)\right)$$

where

$$\lambda = \frac{2h^2}{\Delta}, \qquad c = 1 - \frac{i\lambda}{6d}, \qquad a = 1 + \frac{h^2 G}{2d}$$
The same procedure applied to the other border, on the right, yields the analogous recursion relations for $e_j$ and $q_j$ at $j = J$.
Listings
SIZE=512;                  %cells
XL=-40; XR=40;             %nanometers
TIMESTEPS=2000;
eV=1.6022*1e-19;
hbar=1.0546*1e-34;
c_light=2.9979*1e8;
Melectron=0.51099*1e6;
MASS=Melectron*Meffective; %eV
MASSE=MASS;
X=XL:H:XR;
temp=X*0;
%% SYSTEM MASS AND POTENTIAL
Vj=temp;                   % Potential in eV
%Vj=Vj+1e3*exp(-1/2/0.0125^2*(X-.75).^2);
%X=X*Xmscale;
Vj=Vj+((X>-10)&(X<-5))+((X>5)&(X<10));
Vj=Vj*.25;
Vj=Vj*Energy_scale;
%Initial wavefunction
Psi0=temp;
wave_sigma=sqrt(10);
X0=-25;
K=0.31;
Psi0=exp(-1/2/wave_sigma^2*(X-X0).^2).*exp(i*K*X);
%Normalize
%Psi0=Psi0*1/sqrt(normaliz(abs(Psi0.*Psi0),X));
%SYSTEM DEFINITION ENDS
PSI=zeros(TIMESTEPS,SIZE);
PSI(1,:)=Psi0;
ALPHAj=temp;
%Fixed
Gj=Vj-2*i/DELTA;
Gj=Gj*(2*MASS);
Dj=1-H^2/12*Gj;
Aj=1+H^2*Gj./(2*Dj);
j=J;
niuJ=(1-abs(Aj(j)^2))/abs(1-Aj(j)^2);
j=1;
niu1=(1-abs(Aj(j)^2))/abs(1-Aj(j)^2);
% If V1=VJ then niu1=niuJ (symmetrical)
%Calculate all LEGENDRE coefficients (#timesteps) ahead of time; index
% 1->n=0
%REPEAT 1 For X=1
j=1;
lambda=2*H^2/DELTA;        %Fixed
c=1-i*lambda/(6*Dj(j));    %Fixed
phi=angle((Aj(j)^2-1)/c);  %Fixed
niu=(1-abs(Aj(j)^2))/abs(1-Aj(j)^2);
% PLEASE NOTE PN(n) is actually the Legendre polynomial of order n-1
% So to refer to PN(m) use m=n+1
PN=temp; LN=temp;
PN(1)=1; PN(2)=niu;        % These are PN0 and PN1
for m=2:(TIMESTEPS-1)
  %generate Legendre polynomials
  n=m-1;
  PN(m+1)=2*niu*PN(m)-PN(m-1)-(niu*PN(m)-PN(m-1))/(n+1);
end;
LN(1)=niu*exp(-i*phi);     % LN(0) would be -1; LNn is LN(n)
for n=2:(TIMESTEPS-1)
  m=n+1;
  LN(n)=exp(-i*n*phi)/(2*n-1)*(PN(m)-PN(m-2));
  %LN(n)=exp(-i*n*phi)*(niu*PN(m-1)-PN(m-2))*(2*n+1)/(2*n-1)/(n+1);
end;
LN1=LN;
%REPEAT 2 for X=J
j=J;
lambda=2*H^2/DELTA;        %Fixed
c=1-i*lambda/(6*Dj(j));    %Fixed
phi=angle((Aj(j)^2-1)/c);  %Fixed
niu=(1-abs(Aj(j)^2))/abs(1-Aj(j)^2);
% PLEASE NOTE PN(n) is actually the Legendre polynomial of order n-1
% So to refer to PN(m) use m=n+1
PN=temp; LN=temp;
PN(1)=1; PN(2)=niu;        % These are PN0 and PN1
for m=2:(TIMESTEPS-1)
  %generate Legendre polynomials
  n=m-1;
  PN(m+1)=2*niu*PN(m)-PN(m-1)-(niu*PN(m)-PN(m-1))/(n+1);
end;
LN(1)=niu*exp(-i*phi);     % LN(0) would be -1; LNn is LN(n)
for n=2:(TIMESTEPS-1)
  m=n+1;
  LN(n)=exp(-i*n*phi)/(2*n-1)*(PN(m)-PN(m-2));
  %LN(n)=exp(-i*n*phi)*(niu*PN(m-1)-PN(m-2))*(2*n+1)/(2*n-1)/(n+1);
end;
LNJ=LN;
%###################################
%               LOOP
%###################################
for N_INDEX=1:(TIMESTEPS-1)
  N_INDEX
  %###################################
  %STEP 1
  %###################################
  Fj=Fj*2*MASS;
  %###################################
  %STEP 2
  %###################################
  % Calc Ej  (Fixed)
  t1=Aj(1)+sqrt(Aj(1)^2-1);
  Ej(1)=ALPHAj(1);
  %Recurrence
  for j=2:SIZE
    Ej(j)=2*Aj(j)-1/Ej(j-1);
  end
  % Calc Qj  (Variable)
  j=1;
  %convolution
  SUM=0;
  for k=1:N_INDEX
    n=N_INDEX-k+1;
    SUM=SUM+PSI(k,j)*LN1(n);
  end;
  j=1;
  Qj(j)=conj(Dj(j))*(conj(Aj(j))-ALPHAj(j))*PSI(N_INDEX,j)+Dj(j)*(Aj(j)- ...
      ALPHAj(j))*SUM;
  %Recurrence
  for j=2:SIZE
    Qj(j)=Qj(j-1)/Ej(j-1)+H^2*Fj(j)/Dj(j);
  end
  %###################################
  %STEP 3
  %###################################
  t1=Aj(J)+sqrt(Aj(J)^2-1);
  j=J;
  %convolution
  SUM=0;
  for k=1:N_INDEX
    n=N_INDEX-k+1;
    SUM=SUM+PSI(k,j)*LNJ(n);
  end;
  j=J;
  BETAj(j)=conj(Dj(j))*(conj(Aj(j))-ALPHAj(j))*PSI(N_INDEX,j)+Dj(j)*(Aj(j)- ...
      ALPHAj(j))*SUM;
  %Recurrence for Wj
  for j=(J):-1:2
  end
  %##############################
  %STEP 4
  %##############################
end;
figure
image(abs(PSI)*100);
Bohmian Code
Newtonian
%SIZE=512
%cells
XL=-40; XR=40;
nump=40;
porder=2;
maxgauss=1;
TOLERANCE=1e-5*nump;
SIZE=10000;            %for psi, etc
TIMESTEPS=2000         %
DELTA=1;
DELTA=DELTA*fsec;      %timestep in femtoseconds
POTENTIAL='0*0.0061.*X.^4+50*cos(X)*0';
J=SIZE;
H=(XR-XL)/(SIZE-1);    %grid size
X=XL:H:XR;
temp=X*0;
%% SYSTEM MASS AND POTENTIAL
Vj=temp;               % Potential in eV
%Vj=Vj+1e3*exp(-1/2/0.0125^2*(X-.75).^2);
%X=X*Xmscale;
%Vj=Vj+((X>-10)&(X<-5))+((X>5)&(X<10));
%Vj=Vj*.25;
Vj=Vj+eval(POTENTIAL);
Vj=Vj*Energy_scale;
VV=Vj;
%Initial wavefunction
Psi0=temp;
wave_sigma=sqrt(10);
X0=20;
K=0.31;
Psi0=exp(-1/2/wave_sigma^2*(X-X0).^2).*exp(i*K*X);
Psi1=exp(-1/2/wave_sigma^2*(X+X0).^2).*exp(i*K*X);
Psi0=Psi0+1/2*Psi1;
%Psi0=eval(PSI0);
%Normalize
%Psi0=Psi0*1/sqrt(normaliz(abs(Psi0.*Psi0),X));
% SYSTEM DEFINITION ENDS
PSI=zeros(TIMESTEPS,SIZE);
PSI(1,:)=Psi0;
rho=abs(Psi0).^2;
S=angle(Psi0);
%S=unwrap(S,[],2);
S=unwrap(S,[],1);
vel=diff(S);
%p0=log(abs(Psi0));
p0=abs(Psi0).^2;
cp0=cumtrapz(p0);
cp0=cp0/cp0(SIZE);
clear XX;
XX=ones(nump,TIMESTEPS)*NaN;
QQ=ones(nump,TIMESTEPS)*NaN;
PP=XX;
for i=1:nump
  i;
  XX(i,1)=interpolate(X,cp0,prob1);
  tmpi=getindex(cp0,1/(nump+1)*i);
  XX(i,1)=X(tmpi);   % don't use X directly; use this instead
  PD(i,1)=1/(nump+1)*i;
end;
QQ=XX;
PP=0*XX;
FF=XX*0;
%QF=XX*0;
%%%%%% LOOP
dt=DELTA;
VG=diff(VV); tmp=VG; VG=[tmp VG(SIZE-1)];
go_back1step=0;
tstep=2;
global FF;
while(tstep<=TIMESTEPS)
  error=0;
  notvalid=1;
  while notvalid
    if(go_back1step) if(tstep>2) tstep=tstep-1; go_back1step=0; end; end;
    error=0;
    notvalid=0;
    for i=1:nump
      q0=QQ(i,tstep-1);
      p0=PP(i,tstep-1);
      [qf, pf, ff] = symplectic21(q0,p0,dt,X,QQ,PP,VG,i,nump,tstep,PD,coefs, ...
          porder,myresidues);
      [qh1, ph1, fh1] = symplectic21(q0,p0,dt/2,X,QQ,PP,VG,i,nump,tstep,PD, ...
          coefs,porder,myresidues);
      [qh2, ph2, fh2] = symplectic21(qh1,ph1,dt/2,X,QQ,PP,VG,i,nump,tstep,PD, ...
          coefs,porder,myresidues);
      if(~isfinite(qh2)) fsdadsaerrrrrrr; end;   % deliberate crash on NaN
      %error=abs(ph2-pf);   %momentum
      error=abs(qh2-qf);    %space
      if(error>TOLERANCE)
        notvalid=1; dt=dt/2;
        go_back1step=1;
      end;
      if((i>1)&(i<nump))
        if(qh2<QQ(i-1,tstep-1))|(qh2>QQ(i+1,tstep-1))
          qh2=1/2*(QQ(i-1,tstep-1)+QQ(i+1,tstep-1));
          ph2=1/2*(PP(i-1,tstep-1)+PP(i+1,tstep-1));
          if(tstep>3) tstep=tstep-3; end
          notvalid=1; dt=dt/2;
          go_back1step=1;   %go back 1 step
        end; end;
    end;
  end;
  dt=dt*1.002;
  tstep=tstep+1;
end;
subplot(3,1,1);
pp=QQ+j*PP;
plot(QQ',PP'); xlabel('space'); ylabel('momentum');
subplot(3,1,2);
% plot(PP','x-')
plot(QQ'); xlabel('time'); ylabel('space');
subplot(3,1,3);
plot(TT(1,:)); xlabel('step'); ylabel('time delta');
z=cumsum(TT');
figure
plot(z,QQ'); ylabel('space'); xlabel('linear time');
grid
dt*TIMESTEPS
(sum(QQ(:,TIMESTEPS)~=sort(QQ(:,TIMESTEPS))))/nump
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [coefs resi0 resi1 resi2 resi3]=analyse_rho22(QQ,PD,time,porder, ...
    maxgauss,fourier_residue)
% this routine will try to get an analytical expression for rho
%Get quadratic coefficients ...
zx=QQ(:,time); zy=PD;
rho=gradient(zy,zx);
CC=1/2*log(rho);
[siza sizb]=size(rho);
siz=max(siza,sizb);
%[maxi,mini,num_max,num_min]=findmaxmin(CC,siz,10,9);
maxi=siz; mini(1)=1; mini(2)=siz; num_max=1; num_min=2;
ngauss=num_min-1;
if(ngauss>maxgauss) ngauss=maxgauss; end;
for n=1:ngauss
  istart=mini(n);
  iend=mini(n+1);
  istart=istart+1;
  iend=iend-1;
  tempy=CC(istart:iend);
  tempx=QQ(istart:iend,time);
  coef(n,:)=polyfit(tempx,tempy,porder);
  %SANITY CHECK
  if(coef(n,1)>0) coef(n,:)=coef(n,:)*0; end;
  residy=tempy-polyval(coef(n,:),tempx);
  fresidy=dst(residy);
end;
specsize=max(max(size(residy)));
%res0=idst(residy); will be a check later
res0=zeros(1,specsize);
res1=zeros(1,specsize);
res2=zeros(1,specsize);
res3=zeros(1,specsize);
%should calculate this with idst ... but like this it is a good check
N=specsize;
for n=1:N
  for k=1:N
    if(k>N/10) fresidy(k)=0; end;   % keep lowest 10% (LPFilter)
    ss=sin(pi*k*n/(N+1));
    cc=cos(pi*k*n/(N+1));
    res0(n)=res0(n)+fresidy(k)*ss;
    res1(n)=res1(n)+fresidy(k)*cc*(pi*k/(N+1));
    res2(n)=res2(n)+fresidy(k)*(-ss)*(pi*k/(N+1))^2;
    res3(n)=res3(n)+fresidy(k)*(-cc)*(pi*k/(N+1))^3;
  end;
end;
res0=res0*2/(N+1);
res1=res1*2/(N+1);
res2=res2*2/(N+1);
res3=res3*2/(N+1);
temperr=res0-residy';
err_in_idst=max(max(temperr));
%coef=polyfit(QQ(:,time),CC,porder);
%###############
figure(1);
hold off
subplot(2,1,1);
hold on
for n=1:ngauss
  istart=mini(n);
  iend=mini(n+1);
  istart=istart+1;
  iend=iend-1;
  xstart=QQ(istart,time);
  xend=QQ(iend,time);
  vect=xstart:(xend-xstart)/200:xend;
  drawnow
  subplot(2,1,1);
  plot(vect,polyval(coef(n,:),vect));
  hold off
  subplot(2,1,2);
  plot(tempx,res0,'x-')
  drawnow;
end;
hold off
%###############
figure(2)
plot(QQ(:,1:time)');
coef
%z
coefs=coef;
resi0=res0;
resi1=res1; resi2=res2; resi3=res3;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [q2, p2, f1] = symplectic21(q0,p0,dt,X,QQ,PP,VG,i,nump,tstep,PD,coefs, ...
    porder,myresidues)
q0=QQ(i,tstep-1);
p0=PP(i,tstep-1);
%get force term from potential
%curi=getindex(X,q0);
%if(curi>0) f0=-VG(curi); end;
%f0=-2*q0;
%porder
f0=extforces21(q0,p0,QQ,PP,i,nump,tstep,PD,coefs,porder,myresidues);
f0=f0+0;
p1=p0+dt/2*f0;
q1=q0+dt/2*p1;
%f1=-2*q1;
f1=extforces21(q1,p1,QQ,PP,i,nump,tstep,PD,coefs,porder,myresidues);
f1=f1+0;   %other forces
p2=p1+dt/2*f1;
q2=q1+dt/2*p2;
return
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function temp=extforces21(q0,p0,QQ,PP,i,nump,tstep,PD,coefs, ...
    porder,myresidues)
temp=0;
resi0=myresidues(1,:);
resi1=myresidues(2,:);
resi2=myresidues(3,:);
resi3=myresidues(4,:);
%for n=1:nump
%  if n==i continue; end;
%  d=((q0-QQ(n,tstep-1)));
%  u=sign(d);
%  temp=temp+100*1/d^2*(u);
%end;
%DEPRECATED ################################### DERIVATIVES
% QVector=QQ(:,tstep-1);
% list=get_nearest_neigh_wself(QVector,i,5);
% PD_DIFFS=maple_deriv(QQ(list),PD(list)');
% P0=PD_DIFFS(1); P1=PD_DIFFS(2); P2=PD_DIFFS(3); P3=PD_DIFFS(4); P4=PD_DIFFS(5)
% QF=-1/2*(P4*P1^2+P2^3-2*P2*P3*P1)/P1^3;
% temp=QF*0.001;
%DEPRECATED ###################################
% CD1=C1+2*C2*X+3*C3*X^2+ ....
% CD2=2*C2+3*2*C3*X+ ....
%coefs
% C0=coefs(3);   coef(49)  (porder 48)
% C1=coefs(2);   coef(48)
% C2=coefs(1);   coef(47)
coef=0;
X=q0;
CD1=0;
for n=1:(porder+1-1);
  coef=coefs(n);
  m=porder-n+1;
  CD1=CD1+coef*(m)*X^(m-1);
end;
CD2=0;
for n=1:(porder+1-2);
  coef=coefs(n);
  m=porder-n+1;
  CD2=CD2+coef*(m)*(m-1)*X^(m-2);
end;
CD3=0;
if(porder>2)
  for n=1:(porder+1-3);
    coef=coefs(n);
    m=porder-n+1;
    CD3=CD3+coef*(m)*(m-1)*(m-2)*X^(m-3);
  end;
end;
% %%%%%%%%%% QF=-(2*cd1*cd2+cd3)
QF=(2*CD1*CD2+CD3);
%temp=0.1*QF;
% ###############################################
[ngauss temp]=size(coefs);
rho0=0; rho1=0; rho2=0; rho3=0;
% FOURIER RESIDUES
res0=0;
res1=0;
res2=0;
res3=0;
if(i>1)&&(i<nump-1)
  res0=resi0(i);
  res1=resi1(i);
  res2=resi2(i);
  res3=resi3(i);
end;
for n=1:ngauss
  a2=coefs(n,1);
  a1=coefs(n,2);
  a0=coefs(n,3);
  %n
  expo=exp(a2*q0^2+a1*q0+a0);   %%BUG
  rho0=rho0+expo*exp(res0);
  rho1=rho1+expo*(2*a2*q0+a1+res1);
  rho2=rho2+expo*(2*a2+res2+(2*a2*q0+a1+res1)^2);
  rho3=rho3+expo*(res3+3*(2*a2+res2)*(2*a2*q0+a1+res1)+(2*a2*q0+a1+res1)^3);
end;
%rho0
%rho0=rho0+expo*res0;
%rho1=rho1+expo*res1;
%rho2=rho2+expo*res2;
%rho3=rho3+expo*res3;
temp=1/2*(rho1^3-2*rho1*rho2*rho0+rho3*rho0^2)/rho0^3;
%checkc=(temp/2/alphan^2+miu)/q0;
% Harmonic osc
%curi=getindex(X,q1);
%f1=0;
%if(curi>0) f1=-VG(curi); end;
%f1=-2*q1;
%temp=0.1*QF;
%tempi=-.1*q0-0.005*p0; %-0.003*q0^3;
xx=q0-5;
% tempi=+0.001*sech(xx)*tanh(xx)*1-0*q0-0.0000*q0^4;
tempi=-sech(xx)*tanh(xx)*1-0*q0-0.0000*q0^4;
%tempi=-sech(xx)*tanh(xx)*0-1*q0-0.0000*q0^4;
temp=temp+tempi;
QHJE

tic
POTENTIAL='0*t+1*x.^2'
ONEHALF=1/2*(1);
nump=50; porder=6;   %linear
TOLERANCE=1e-5*nump;
TIMESTEPS=10000      %
dt=10^-2*1;
order=2
points=order+1
%points=
clear XX;
XX=ones(nump,TIMESTEPS)*NaN;
QQ=ones(nump,TIMESTEPS)*NaN;
RH=XX; PP=XX;
VP=XX;
QQ=XX;
%PP=0*XX;
FF=XX*0;
%Q=XX*0;
DDSS=PP;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X=-10*1:20*1/(nump-1):10*1;
clear XX;
XX=ones(nump,TIMESTEPS)*NaN;
QQ=ones(nump,TIMESTEPS)*NaN;
RH=XX; PP=XX;
SS=XX;
VP=XX;
VP=VP*0;
for i=1:nump
  QQ(i,1)=X(i);
  RH(i,1)=log(rho(0,X(i)))/2;
  SS(i,1)=Sphase(0,X(i));
  PP(i,1)=vel(0,X(i));
  VP(i,1)=Potential(0,X(i));
end
for i=1:TIMESTEPS
  VP(:,i)=VP(:,1);   %time indep for now
end
%initial phase fix
temp=SS(:,1);
temp=unwrap(temp,2);
SS(:,1)=temp;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%START HERE
tstep=1;
ctime=-dt;   %not zero
while(tstep<=TIMESTEPS),
  dt
  notvalid=1; go_back1step=0;
  while notvalid,
    if(go_back1step)
      tstep=tstep-go_back1step;
      tstep=max(tstep,1);
      go_back1step=0;
    end;
    notvalid=0;
    %{
    %this is slow somehow
    dt2=dt/2;
    t=ctime;
    for i=1:nump
      x=QQ(i,tstep);
      VP(i,tstep)=0; %eval(POTENTIAL); %time indep for now
    end
    plot(VP,'>-');
    %%%%%%%%%%% 2 half steps
    %}
    %faster here
    x=QQ(:,tstep);
    %VP(:,tstep)=10*10^-3*(x-2).^2;
    VP(:,tstep)=0*1*10^-4*(x-2).^2.*(x-0).^2;
    %% POTENTIAL HERE
    %VP(:,tstep)=10^1*sech(4*(x-3));
    VP(:,tstep)=0*8*10^-2*(x).^2;
    qq00=QQ(:,tstep);
    %{
    if(tstep>1)
      qp11=calcQP3(qq00,RH(:,tstep),RH(:,tstep-1),QP(:,tstep-1));
    else
      qp11=calcQP(qq00,RH(:,tstep));
    end
    [pp11 ddss11]=calcPPDDSS(qq00,SS(:,tstep));
    qq12=QQ(:,tstep)+dt2*pp11;   % only 1st order
    %VP(:,tstep)=10*pp11.^2;
    %% POTENTIAL HERE
    ss12=SS(:,tstep)+dt2*(ONEHALF*pp11.^2-(VP(:,tstep)+qp11));
    [ptemp ddss12]=calcPPDDSS(qq12,ss12);
    rh12=RH(:,tstep)+dt2*(-1/2*ddss12);
    %rh12=RH(:,tstep)+dt2*(-1/2*ddss11);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    if(tstep>1)
      qp12=calcQP3(qq12,rh12,RH(:,tstep),qp11);
    else
      qp12=calcQP(qq12,rh12);
    end
    [pp12 ddss12]=calcPPDDSS(qq12,ss12);
    ss22=ss12+dt2*(ONEHALF*pp12.^2-(VP(:,tstep)+qp12));
    [pp22 ddss22]=calcPPDDSS(qq22,ss22);
    rh22=rh12+dt2*(-1/2*ddss12);
    %rh22=rh12+dt2*(-1/2*ddss11);
    %}
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %{
    if(tstep>1)
      qp11=calcQP3(qq00,RH(:,tstep),RH(:,tstep-1),QP(:,tstep-1));
    else
      qp11=calcQP(qq00,RH(:,tstep),qq00,qq00);
    end
    % qp11=threshold_smooth(qq00,qp11,1);
    %qp11=fixborders2(qq00,qp11,3,0,1);
    [pp11 ddss11]=calcPPDDSS(qq00,SS(:,tstep));   %% v=grad(S)
    %pp11=fixborders2(qq00,pp11,3,0,1);
    %ddss11=fixborders2(qq00,ddss11,3,0,1);
    % ddss11=threshold_smooth(qq00,ddss11,10);
    %no need for this block, use from prev one
    qq1f=QQ(:,tstep)+dt*pp11;   % only 1st order
    ss1f=SS(:,tstep)-dt*(ONEHALF*pp11.^2+(VP(:,tstep)+qp11));
    [ptemp ddss1f]=calcPPDDSS(qq1f,ss1f);
    %ddss1f=fixborders2(qq00,ddss1f,11,0,1);
    % ddss1f=threshold_smooth(qq1f,ddss1f,10);
    rh1f=RH(:,tstep)+dt*(-1/2*ddss1f);
    %rh1f=RH(:,tstep)+dt*(-1/2*ddss11);
    %}
    ITER_HIGHTHRESH=16;
    ITER_LOWTHRESH=9;
    THRESHOLD=10^-13;
    X0=QQ(:,tstep); S0=SS(:,tstep); C0=RH(:,tstep);
    X1=X0; S1=S0; C1=C0;
    [DS0 DDS0]=calcPPDDSS(X1,S1);
    [DC0 DDC0]=calcPPDDSS(X1,C1);
    % S0=S0+dt/2*(-(DS0.^2)+DDC0+DC0.^2);
    % DDC0=DDC0*0; DC0=DC0*0;
    % figure(3); plot(S1);
    notvalid=1;
    while notvalid,
      iter=0; DDS=X0; DDC=X0; err=100;
      while abs(err)>THRESHOLD,
        iter=iter+1;
        LASTDDS=DDS;
        [DS DDS]=calcPPDDSS_alternate(X1,S1);
        [DC DDC]=calcPPDDSS_alternate(X1,C1);
        thres=10^-10;
        % DDC=threshold_smooth2(X1,DDC,thres);
        % DDS=threshold_smooth2(X1,DDS,thres);
        % DC=threshold_smooth2(X1,DC,thres);
        % DS=threshold_smooth2(X1,DS,thres);
        % DDC=DDC*0; DC=DC*0;
        %DDS=fixborders2(X1,DDS,2,0,1);
        %DS=fixborders2(X1,DS,2,0,1);
        %DDC=fixborders2(X1,DDC,2,0,1);
        %DC=fixborders2(X1,DC,2,0,1);
        err=max(max(LASTDDS-DDS));
        X1=X0+dt*DS;
        S1=S0+dt/2*(-(DS.^2)+DDC+DC.^2);
        C1=C0-dt/2*DDS;
        % plot(X0,DDS); grid; drawnow
      end
      iter
      if(iter>ITER_HIGHTHRESH)
        dt=dt/2; notvalid=1;
      else
        notvalid=0;
      end;
    end;
    %trapezoid semi-implicit update
    X1=X0+dt/2*(DS+DS0);
    C1=C0-dt/4*(DDS+DDS0);
    % X1=X0+dt*DS;
    % S1=S0+dt/2*(-(DS.^2)+DDC+DC.^2);
    % C1=C0-dt/2*DDS;
    %%%%%%%%%%%%%%%%%
    %maxerror=max(abs(qq22-qq1f));
    %if(maxerror>TOLERANCE)
    %  notvalid=1; dt=dt/2;
    %  go_back1step=1;
    %end
    qq1f=X1;
    %fefrefre;   % (debug stop)
  end;
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  % QP(:,tstep)=qp12;
  % PP(:,tstep)=pp12;
  % DDSS(:,tstep)=ddss12;
  % QQ(:,tstep+1)=qq22;
  % SS(:,tstep+1)=ss22;
  % DDSS(:,tstep+1)=ddss22;
  % RH(:,tstep+1)=rh22;
  subplot(4,1,1);
  plot(QQ(:,tstep),(RH(:,tstep)),'x-');
  subplot(4,1,2);
  plot(QQ(:,tstep),QP(:,tstep),'x-');
  subplot(4,1,3);
  plot(QQ(:,tstep),(SS(:,tstep)),'x-');
  %plot(QQ(:,tstep),-1/2*2*ddss22,'x-');
  [temp1 temp2]=calcPPDDSS(QQ(:,tstep),QP(:,tstep));
  plot(QQ(:,tstep),DS,'x-')
  subplot(4,1,4);
  plot(QQ(:,tstep),DDS,'x-')
  %plot(QQ(:,tstep),temp2,'x-')
  %if(tstep>2) plot(QQ(:,tstep),(QP(:,tstep)-QP(:,tstep-1))./dt,'x-'); end
  drawnow;
  %plot(QQ(:,tstep),gradient(SS(:,tstep)),'x-'); drawnow;
  %pause(2)
  deltat(tstep)=dt;
  ctime=ctime+dt;
  time(tstep)=ctime;
  dt=dt;
  ctime
  tstep=tstep+1;
end
toc
%}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
siz=max(size(X));
my_S=spapi(4,X,S);
sp1=fnder(my_S,1); t1=fnval(sp1,X);
sp2=fnder(my_S,2); t2=fnval(sp2,X);
my_C=spapi(4,X,C);
sp1=fnder(my_C,1); t1=fnval(sp1,X);
sp2=fnder(my_C,2); t2=fnval(sp2,X);
zones=zeros(1,siz);
zones=(DDC0>10^-8);
%enlarge by 1
enl=4;
zones=smooth(zones,enl*2+1); %zones=zones>0;
[numz breaks]=getnumtypezones(zones);
% figure(2); plot(zones,'-x'); drawnow;
starts=1;
for i=1:numz
  % 0 for normal transition zones
  % -1 for always splines
  % 3 for always least sq
  if(zones(starts)>0)
    %splines or local
    ends=breaks(i);
    DC(starts:ends)=DC0(starts:ends);
    DDC(starts:ends)=DDC0(starts:ends);
    DS(starts:ends)=DS0(starts:ends);
    DDS(starts:ends)=DDS0(starts:ends);
    starts=ends+1;
  else
    ends=breaks(i);
    %least sq or global
    gx=X(starts:ends); gc=C(starts:ends); gs=S(starts:ends);
    my_C=spap2(1,3,gx,gc); my_S=spap2(1,3,gx,gs);
    sp1=fnder(my_C,1); t1=fnval(sp1,gx);
    sp2=fnder(my_C,2); t2=fnval(sp2,gx);
    DC(starts:ends)=t1;
    DDC(starts:ends)=t2;
    sp1=fnder(my_S,1); t1=fnval(sp1,gx);
    sp2=fnder(my_S,2); t2=fnval(sp2,gx);
    DS(starts:ends)=t1;
    DDS(starts:ends)=t2;
    starts=ends+1;
  end;
end;
DC=DC'; DDC=DDC'; DDS=DDS'; DS=DS';
return;
% sp=spapi(4,x,yerr);
% ch=spapi(augknt(x,4,2),[x x],[yerr dy]);
% ls=spap2(18,3,x,yerr);   % quadratic pieces = x-k+1
% my_sp=spapi(4,X,S);      % 3rd order spline (cubic)
my_S=spap2(1,3,X,S);
my_C=spap2(1,3,X,C);
%my_sp=csapi(X,S);   %reg spline
% my_sp=csaps(X,S);  %smoothed
sp1=fnder(my_S,1); t1=fnval(sp1,X);
sp2=fnder(my_S,2); t2=fnval(sp2,X);
DS=t1;
DDS=t2;
sp1=fnder(my_C,1); t1=fnval(sp1,X);
sp2=fnder(my_C,2); t2=fnval(sp2,X);
DC=t1;
DDC=t2;
% t1=fixborders(X,t1,2,2);   %linear
% t2=fixborders(X,t2,2,1);   %quadratic
return
siz=max(size(X));
order=2;
points=order+1;
interpx=zeros(1,points); interpy=interpx;
t1=interpx; t2=interpx;
for i=1:siz
  originx=X(i);
  indexes=get_n_neighbours(points,X,originx);
  interpx=X(indexes); interpy=S(indexes);
  sp=polyfit(interpx,interpy,order);
  sp1=polyder(sp); sp2=polyder(sp1);
  tt1=polyval(sp1,originx); tt2=polyval(sp2,originx);
  % PP(i)=t1;
  % DDSS(i)=t2;
  t1(i)=tt1;
  t2(i)=tt2;
end
t1=fixborders(X,t1,1,2)';
t2=fixborders(X,t2,1,2)';
PP=t1';
DDSS=t2';
return;
%my_sp_extended=my_sp;
%fnxtr(my_sp,2);   %linear outside domain
%CC(1)=fnval(my_sp_extended,zx(1));
%CC(nump)=fnval(my_sp_extended,zx(nump));
%siz=max(size(S));
%pvalues=0.99;   %optimal value
%weights=[1 1 ones(1,siz-4) 1 1];
%my_sp=csaps(X,S,pvalues,[],weights);   % smoothing splines
%my_sp=spapi(4,X,S);   % 3rd order spline (cubic)
siz=max(size(S));
pvalues=0.99;   %optimal value
weights=[1 1 1*ones(1,siz-4) 1 1];
%my_sp=csaps(X,S,pvalues,[],weights);
my_sp=csapi(X,S);   % no blending
sp1=fnder(my_sp,1);
sp2=fnder(my_sp,2);
cub=csapi(X,S);
cub1=fnder(cub,1);
cub2=fnder(cub,2);
%dasdsadsa   % (debug stop)
blend=[0 0 0 0 0 ones(1,siz-10) 0 0 0 0 0]';
%must be between 0-->cubic and 1--->smoothed
PP=fnval(sp1,X).*(blend)+fnval(cub1,X).*(1-blend);
DDSS=fnval(sp2,X).*(blend)+fnval(cub2,X).*(1-blend);
%must blend in csplines with smoothed ones at borders
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
siz=max(size(X));
interpx=zeros(1,points); interpy=interpx;
for i=1:siz
  originx=X(i);
  indexes=get_n_neighbours(points,X,originx);
  interpx=X(indexes); interpy=S(indexes);
  sp=polyfit(interpx,interpy,order);
  sp1=polyder(sp); sp2=polyder(sp1);
  t1=polyval(sp1,originx); t2=polyval(sp2,originx);
  PP(i)=t1;
  DDSS(i)=t2;
end
%}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Q=calcQP2_splines(X,RH,prevRH,prevQ)
delt=RH-prevRH;
%rescale
pvalues=0.99-(delt>0)*0.5;
pvalues=0.99;
%siz=max(size(X));
Q=0;
%my_sp=spap2(1,3,X,RH);   % least sq quadratic
%my_sp=spapi(4,X,RH);     % 3rd order spline (cubic), continuous 2nd derivative
%my_sp=csapi(X,RH);       % cubic spline
%my_sp=spaps(X,RH,0);     % smoothing spline
siz=max(size(RH));
my_sp=csaps(X,RH,pvalues',[],weights);   %smoothing spline
%my_sp=spap2(1,3,X,RH);   %least sq quadratic -> correct behaviour for single gaussians
my_sp=csapi(X,RH);   % cubic spline
%my_sp=spaps(X,RH,10^-3);
my_sp_extended=my_sp;
%CC(1)=fnval(my_sp_extended,zx(1));
%CC(nump)=fnval(my_sp_extended,zx(nump));
sp1=fnder(my_sp_extended,1);
sp2=fnder(my_sp_extended,2);
t1=fnval(sp1,X); t2=fnval(sp2,X);
Q=-1/2*(t2+t1.^2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%{
siz=max(size(X));
Q=0;
interpx=zeros(1,points); interpy=interpx;
for i=1:siz
  %order=6;
  %if(i<floor(points/2))|(i>(siz-floor(points/2)))
  %  order=2;
  %end;
  originx=X(i);
  originy=RH(i);
  indexes=get_n_neighbours(points,X,originx);
  interpx=X(indexes); interpy=RH(indexes);
  sp=polyfit(interpx,interpy,order);
  sp1=polyder(sp); sp2=polyder(sp1);
  t1=polyval(sp1,originx); t2=polyval(sp2,originx);
  tempQ=-1/2*(t2+t1.^2);
  Q(i)=tempQ;
  Q=Q';
end
%}
function indexes=get_n_neighbours(n,X,originx)
[temps tempi]=sort(abs(X-originx));
result=tempi(1:n);
indexes=sort(result)';
% return
siz=max(size(X));
me=tempi(1);
idx=1:n;
idx=me+idx-ceil(n/2);
adjl=min(idx(1)-1,0);
adjr=max(idx(n)-siz,0);
index=idx-adjl-adjr;
%siz=max(size(X));
%if(indexes(1)>1)&(indexes(n)<siz)
%  indexes=(result(1):result(1)+n-1)-floor(n/2);
%end;