
Molecular Heterogeneous Catalysis, Wiley (2006), ISBN 3-527-29662-X
Appendices
9.b. Car–Parrinello Ab Initio Molecular Dynamics
In their landmark paper, Car and Parrinello demonstrated that the electronic structure does not have to be converged to the Born–Oppenheimer surface at every time step throughout an ab initio MD simulation[50]. Instead, the orbitals can be propagated together with the atomic nuclei by assigning a fictitious mass to the electronic degrees of freedom. An important practical point in making this work is establishing the optimal step size with which to propagate the wavefunction. Because the electronic motion remains much faster than that of the nuclei, the electrons stay close to their optimized state as the nuclear positions change. The Lagrangian for the Car–Parrinello algorithm is defined as follows:
L_{CP} = \sum_I \tfrac{1}{2} M_I \dot{R}_I^2 + \sum_i \tfrac{1}{2} \mu_i \langle \dot{\psi}_i | \dot{\psi}_i \rangle - E[\psi_i, R_I] + \sum_{i,j} \Lambda_{ij} \left( \langle \psi_i | \psi_j \rangle - \delta_{ij} \right)    (A45)
The first and second terms in Eq. (A45) correspond to the kinetic energy of the nuclei and the fictitious kinetic energy of the electrons, respectively. The third term is the total electronic energy, which acts as the potential energy for the nuclear motion. The last term enforces the constraint that the orbitals remain orthonormal[55].
The equations of motion for the nuclei and the electrons are then given as
M_I \ddot{R}_I(t) = -\frac{\partial}{\partial R_I} E[\psi_i(r,t), R_I]    (A46)
\mu_i \ddot{\psi}_i(t) = -\frac{\delta}{\delta \psi_i^*} E[\psi_i(r,t), R_I] + \sum_j \Lambda_{ij} \psi_j(r,t)    (A47)
respectively.
In this approach, the nuclei are simulated at some finite temperature, T, which ultimately dictates the kinetic energy of the nuclei. The electronic structure, however, is kept close to the Born–Oppenheimer surface, so the fictitious temperature of the electrons must remain close to zero. In simulating the dynamics for a specific system, the electrons must remain “cold” while the atoms remain “hot”, thus maintaining a nearly adiabatic system. The fictitious mass of the electrons and the time step for the dynamics must be carefully chosen so as to prevent energy transfer from the hot nuclei into the cold electrons. The Verlet algorithm is typically used to integrate these equations.
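The adiabaticity requirement can be put in rough numbers. A standard estimate from the Car–Parrinello literature (not derived in this text) puts the lowest fictitious electronic frequency at omega = sqrt(2*E_gap/mu), where E_gap is the gap between occupied and empty states and mu is the fictitious mass. The sketch below, with illustrative values we have assumed, shows the slowest electronic period comfortably exceeding a typical CP time step:

```python
import math

AU_TIME_FS = 0.02418884  # one atomic unit of time, in femtoseconds

def min_electronic_period_fs(gap_hartree, mu_au):
    """Period of the slowest fictitious electronic oscillation,
    using the common estimate omega = sqrt(2*E_gap/mu)
    (inputs in Hartree atomic units)."""
    omega = math.sqrt(2.0 * gap_hartree / mu_au)  # in 1/a.u. of time
    return 2.0 * math.pi / omega * AU_TIME_FS

# illustrative values: gap ~ 0.1 Ha, fictitious mass mu ~ 400 a.u.
period_fs = min_electronic_period_fs(0.1, 400.0)  # roughly 7 fs
dt_fs = 0.12  # a typical Car-Parrinello time step, in fs
```

With these numbers the time step is more than an order of magnitude below the electronic period, which is what keeps the hot nuclei from pumping energy into the cold electronic subsystem.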
In order to solve the equations of motion defined above, the forces on each of the ions must be defined. This can be done by using the Hellmann–Feynman theorem, whereby the force is defined as the derivative of the total energy with respect to the positions of the ions:
f_I = -\frac{d E[\psi_i(r,t), R_I]}{d R_I}    (A48)
The wavefunction, however, also has to change with changes in the coordinates for each particle. The total derivative of the energy with respect to the changes in the positions of the ions can therefore be written as
f_I = -\frac{\partial E}{\partial R_I} - \sum_i \frac{\partial E}{\partial \psi_i} \frac{\partial \psi_i}{\partial R_I} - \sum_i \frac{\partial E}{\partial \psi_i^*} \frac{\partial \psi_i^*}{\partial R_I}    (A49)
The force defined in the Lagrangian is therefore not a physical force, owing to the second and third terms in Eq. (A49). If the wavefunction is an actual eigenstate of the electronic Hamiltonian, then these last two terms are zero, and the forces calculated from Eq. (A48) are the actual physical forces. This is known as the Hellmann–Feynman theorem[55,57].
9.c. Applications
Ab initio molecular dynamics simulations have been used to study a wide range of problems including homogeneous and heterogeneous catalytic systems[49,53], reactions in solution[52], materials surface chemistry[54,55,57], and biochemistry and biocatalysis. The unique strength of the approach is its ability to follow the dynamic changes of the nuclei together with the electronic structure. The rather limited time and length scales that can be reliably simulated are its clearest limitations. Most ab initio MD methods are currently based on density functional theory. The foundation of these simulations is molecular dynamics, which is based firmly on statistical mechanics. Ab initio MD simulations can therefore be carried out within any of the different ensembles available to traditional MD simulations, such as NVT, NVE and NPT. This enables one to calculate the structure, diffusivities and the full range of thermodynamic properties.
The tracking of the electronic structure also provides the ability to calculate activation barriers in addition to kinetics. One can simulate a few hundred atoms out to about 5 ps on current multiprocessor clusters. The typical time step for most catalytic purposes is on the order of 1/20 fs. Blöchl et al.[48] nicely illustrated that a reaction with an activation barrier of 10 kJ/mol would require simulations on the order of 1.4 ps, whereas a reaction with an activation barrier of 50 kJ/mol would require up to 10^7 ps.
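Estimates of this kind follow from a simple transition-state-theory waiting time, t ≈ (1/nu) exp(Ea/RT). A sketch, assuming a typical attempt frequency of 10^13 s^-1 and T = 300 K (both our assumptions, chosen only to reproduce the orders of magnitude quoted above):

```python
import math

def waiting_time_ps(Ea_kJ_mol, T=300.0, prefactor=1.0e13):
    """Mean waiting time (in ps) for a single barrier crossing,
    from a transition-state-theory rate k = nu * exp(-Ea/RT)."""
    R = 8.314  # gas constant, J/(mol K)
    k = prefactor * math.exp(-Ea_kJ_mol * 1.0e3 / (R * T))  # events per second
    return 1.0e12 / k  # seconds -> picoseconds

t_low = waiting_time_ps(10.0)   # low barrier: a few ps, reachable by direct AIMD
t_high = waiting_time_ps(50.0)  # high barrier: ~1e7 ps, far beyond direct AIMD
```

The exponential dependence on the barrier height is why even a modest 50 kJ/mol barrier pushes the required simulation time seven orders of magnitude beyond what direct AIMD can reach.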
Two approaches can be taken to simulate reaction systems with such high barriers. The first involves raising the temperature of the simulation in order to access higher energy states. The second is to carry out a sequence of constrained AIMD simulations along a specific reaction coordinate[51].
B: ATOMIC/MOLECULAR SIMULATION
The ability to model chemical reactions requires a full accounting of the electronic structure of the system and the changes to the electronic structure upon reaction. The basic building blocks for describing the electronic structure are the electrons and the nuclei. Schrödinger’s equation is then simply a force balance that operates on them to provide the total energy and the energy states of the system for a specific configuration. The ability to model the atomic structure in microporous materials, the siting of sorbates, sorption isotherms and sorbate diffusion requires the ability to simulate much larger systems and longer time scales. The changes in the electronic structure are not, however, germane to simulating the structural properties or dynamic responses of systems where electron transfer is not critical. The fundamental building blocks for these systems are the atoms and molecules from which they are comprised. Atomistic-scale simulations must track the forces that occur between individual atoms. Systems which contain molecular entities track both the intra- and intermolecular forces that arise. In many of the simulations for catalysis, we are interested in modeling physisorption or diffusion processes whereby the dominant forces are weak van der Waals interactions, which are usually very difficult to treat quantum mechanically. Atomic and molecular simulations which are based on force fields, however, are typically much

better suited for modeling these weak interactions, since they have been parameterized to handle such systems. Schrödinger’s equation provided the framework for formulating and simulating the forces on the electrons. The simulation of the forces that act on atoms and molecules is strongly rooted in and governed by statistical mechanics and classical dynamics[58]. This allows for the rigorous simulation of a wide range of thermodynamic and dynamic properties for the system of interest. There are a number of elegant reviews on different atomistic and molecular simulations and their application to catalysis; we would refer the interested reader to several references[58−62] and the books by Frenkel and Smit[75], Allen and Tildesley[63], Leach[64] and Rappé and Casewit[65].
Atomic and molecular simulation methods can generally be categorized as either static (equilibrium) or dynamic. Static simulations attempt to determine structural and thermodynamic properties such as crystal structure, sorption isotherms, and sorbate binding. Structural simulations are often carried out using energy minimization schemes similar to those of molecular mechanics. Equilibrium properties, on the other hand, are based on thermodynamics and thus rely on statistical mechanics and simulating the system’s state function. Monte Carlo methods are then used to simulate these systems stochastically.
Following the dynamics of the system can effectively be divided into three different categories: dynamic simulation of the system structure, dynamic simulation of both atomic and electronic structure, and the longer time scale simulation of kinetics for a reaction system. In Appendix A, we described the ab initio molecular dynamics method, which is used to simulate the dynamics of the atomic structure along with the electronic structure. In the following section, we describe the formulation and solution of molecular or lattice dynamics. The simulation of kinetics is more involved and is described in Appendix C.
1. Force Fields
At the heart of nearly all atomistic simulations is the force field used to describe the interaction between atoms or molecules. The accuracy of most atomistic simulations is highly dependent on the accuracy and applicability of the force field that has been developed. The force field contains both intra- and intermolecular interactions. The contributions of the intramolecular interactions to the potential energy are the result of changes in the bond lengths, bond angles and torsion angles from their standard values. The bond-length potential, for example, is expressed by a parabolic equation based on Hooke’s law that relates the potential energy to the difference between the calculated bond length (r_i) and a universal bond length for that specific type of bond (r_0), as shown in Eq. (B1). The terms for bond angle (θ_i for the calculated and θ_0 for the universal) and torsion angle (φ_i for the calculated and φ_0 for the universal) are similar and are shown in Eqs. (B2) and (B3), respectively. Intramolecular forces are fairly standard for most force fields[75].
Bond length:
V_r = \sum_{i=1}^{N_m - 1} \tfrac{1}{2} K_B (r_i - r_0)^2    (B1)
Bond angle:
V_\theta = \sum_{i=1}^{N_m - 2} \tfrac{1}{2} K_\theta (\theta_i - \theta_0)^2    (B2)
Torsion angle:

V_\phi = \sum_{i=1}^{N_m - 3} \sum_{j=0}^{p} C_j (\cos \phi_i)^j    (B3)
The terms K_B, K_θ and C_j are simply empirical coefficients for specific bond, angle and torsion types, fit to experimental bond lengths, bond angles and torsion angles, respectively. The intermolecular potential energy terms attempt to capture different types of intermolecular interactions, including electrostatic and dispersive forces. The intermolecular forces have been treated in various ways. The van der Waals interactions, for example, have been modeled via Lennard–Jones 6-12, Morse and Buckingham type potentials[61]. In some cases these interactions are even neglected.
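As a sketch of how the intramolecular terms of Eqs. (B1)–(B3) are evaluated in practice (the function and parameter names are our own, not from any particular force field):

```python
import math

def bond_energy(bond_lengths, r0, KB):
    """Eq. (B1): harmonic stretch, V_r = sum_i (1/2) K_B (r_i - r0)^2."""
    return sum(0.5 * KB * (r - r0) ** 2 for r in bond_lengths)

def angle_energy(angles, theta0, Ktheta):
    """Eq. (B2): harmonic bend, V_theta = sum_i (1/2) K_theta (theta_i - theta0)^2."""
    return sum(0.5 * Ktheta * (th - theta0) ** 2 for th in angles)

def torsion_energy(torsions, C):
    """Eq. (B3): cosine power series, V_phi = sum_i sum_j C_j cos^j(phi_i)."""
    return sum(sum(Cj * math.cos(phi) ** j for j, Cj in enumerate(C))
               for phi in torsions)
```

Each routine simply loops over the bonds, angles or torsions of the molecule, so the cost of the intramolecular part scales linearly with molecule size.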
Coulombic:

V_C = \sum_{i=1}^{N} \sum_{j>i}^{N} \frac{q_i q_j}{4 \pi \varepsilon_0 r_{ij}}    (B4)
van der Waals:

V_{vDW} = \frac{A_0}{r^{12}} - \frac{B_0}{r^6} \quad (\text{Lennard–Jones 6-12})    (B5)

V_{vDW} = A e^{-Br} \quad (\text{Exponential})    (B6)

V_{vDW} = A e^{-Br} - \frac{C_6}{r^6} \quad (\text{Buckingham})    (B7)
where q_i refers to the charge on atom i, ε_0 is the permittivity, and A_0, B_0, A, B and C_6 are fitting coefficients.
The total potential energy (V_T) of the system can then be described by adding all of the contributions from the intra- and intermolecular forces:
V_T = V_r + V_\theta + V_\phi + V_C + V_{vDW}    (B8)
These equations comprise the “force field” and provide the foundation for nearly all atomistic and molecular simulations. The force field provides the potential energy which is used to carry out energy minimization to identify the most stable structures, Monte Carlo simulations to determine the properties of equilibrated systems and molecular dynamics to follow the dynamics of the system.
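The intermolecular terms can be made concrete as well. A minimal sketch of Eqs. (B4), (B5) and (B7), with illustrative parameter values we have chosen (not from a real force field):

```python
import math

COULOMB_K = 8.9875517873681764e9  # 1/(4*pi*eps0), SI units

def lj_energy(r, A0, B0):
    """Eq. (B5): Lennard-Jones 6-12, A0/r^12 - B0/r^6."""
    return A0 / r**12 - B0 / r**6

def buckingham_energy(r, A, B, C6):
    """Eq. (B7): A exp(-B r) - C6/r^6."""
    return A * math.exp(-B * r) - C6 / r**6

def coulomb_pair_energy(q1, q2, r):
    """One pair term of Eq. (B4): q1*q2/(4*pi*eps0*r)."""
    return COULOMB_K * q1 * q2 / r

# for A0 = B0 = 1 the LJ minimum sits at r = 2^(1/6), with well depth 1/4
r_min = 2.0 ** (1.0 / 6.0)
```

Summing such pair terms over all atom pairs, together with the intramolecular terms, gives the total V_T of Eq. (B8).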
2. Energy Minimization Methods
Elucidating catalyst structure is important to understanding potential reactivity. A great deal of work has been done to derive structure from atomistic simulations. Considerable progress has been made in the development of potentials used in energy minimizations to find the most stable structures for different metal, zeolite and metal oxide systems.
2.a. Metals
The shape, morphology and composition of metal particles and thin films in the absence of a reacting gas can be simulated with a reasonable degree of accuracy, since the potentials for these systems are typically fairly good. Metals have been described by using
1. embedded atom methods (EAM) [66]
2. modified embedded atom methods (MEAM) [67,68]
3. effective medium theory (EMT) [69].
Since we talk very little about metal particle simulations in this book, we provide only a very general overview here. More detailed discussions of these methods can be found in the articles cited above. EAM, MEAM and EMT methods have been used quite effectively with molecular dynamics in simulating the physical vapor deposition processes used in thin film growth. These methods have also been effective in understanding the lowest energy structures of the metal particles present in heterogeneous catalytic systems. These studies, however, have been limited predominantly to simulations in vacuum. Simulating particle shape, morphology and composition under reaction conditions has yet to be accomplished, since these potentials typically account only for metal–metal bonding. Under reaction conditions the surface can be covered with strongly bound intermediates, which can weaken metal–metal bonding and lead to significant changes in the surface structure as well as the particle morphology. For a number of systems, the metal surface changes dynamically with reaction conditions. The ability to simulate these changes would require accurate adsorbate–metal potentials for all of the intermediates that could form. This is a significant challenge owing to the difficulty in developing metal–adsorbate force fields. Recent progress by van Beurden et al.[70] on the development of MEAM potentials to describe CO on Pt for the simulation of Pt reconstruction[71] provides hope. The development of potentials that treat the complex and dynamically changing background composition in a reacting system, however, will be considerably more difficult.
2.b. Metal Oxides
Lattice energy minimization techniques have been used fairly successfully to simulate the lowest energy structures for various metal oxides and zeolites[61]. In this approach, the total potential energy of the lattice is defined as the summation of the potential interactions between each ion in the lattice and the remaining lattice:
U = \frac{1}{2} \sum_{i=1}^{N} V_i    (B9)
In theory, the potential energy for the interactions between ion i and the remaining lattice can be calculated by summing all pair, triplet, quartet, and higher many-body interactions:
V_i(r_1, r_2, \ldots, r_n) = \underbrace{\sum_{j>i}^{N} U_{ij}(r_i, r_j)}_{\text{pair interactions}} + \underbrace{\sum_{j>i}^{N} \sum_{k>j}^{N} U_{ijk}(r_i, r_j, r_k)}_{\text{triplet interactions}} + \ldots    (B10)
These terms are nearly always truncated after accounting for only the pairwise interactions. Extending the treatment to triplet interactions typically does not significantly alter the qualitative trends established from following only the binary interactions.
The binary pair interactions can be modeled by using a force field to describe the system. Various different force fields for the simulation of oxides have been developed and employed. The most basic force field would employ both Coulombic and non-Coulombic interactions, such as
U = \underbrace{\frac{q_i q_j}{r_{ij}}}_{\text{Coulombic}} + \underbrace{\phi_{ij}(r_{ij})}_{\text{non-Coulombic}}    (B11)
Ewald summation techniques are necessary for calculating the Coulombic interactions. The non-Coulombic terms contain both attractive and repulsive components and can typically be modeled by using the Lennard–Jones, exponential or Buckingham potentials of Eqs. (B5), (B6) and (B7), respectively.
Potentials that treat polarization and ionization are important for modeling a number of metal oxide systems. This is difficult since polarization in solids is a many-body effect with various components, and it depends strongly upon changes in the electronic structure as a function of the structure and the forces on the ions. One of the most widely used approaches to simulating polarizability effects is the shell model, which uses a massless shell of charge (electron density)[61].
The simulation of the optimized oxide structure requires the minimization of the energy with respect to changes in the atomic structure of the oxide. One can use a variety of different numerical schemes to optimize the structure of the lattice. Simulated annealing is a fairly robust numerical method that can be used to find the most stable structures; it attempts to mimic computationally how nature forms low-energy structures. The system is started at a high temperature, which allows it to sample various states along the potential energy surface, and is then very gradually cooled to some final state. The simulation samples random moves for all of the atoms at each temperature. The total system potential is calculated after each trial move to determine whether or not the move is accepted. If the trial move leads to a lower energy system, the move is accepted. If the system energy is higher, the probability that the move is accepted follows a Boltzmann distribution:

P_{Accept} = \exp(-\Delta U / k_B T)    (B12)
where ∆U is the change in energy between the system at its initial state and the trial state. This is accomplished by comparing a random number between 0 and 1 with the calculated probability, PAccept. If the random number is lower than PAccept, the move is accepted. This is known as Metropolis sampling.
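A minimal simulated-annealing loop built on the Metropolis rule of Eq. (B12), in reduced units with k_B = 1 (the cooling schedule, step size and the one-dimensional test potential are illustrative choices of ours, not taken from the text):

```python
import math, random

def simulated_anneal(energy, x0, T0=5.0, Tmin=1e-3, cool=0.95,
                     steps_per_T=200, step=0.5, seed=0):
    """Simulated annealing on a 1D coordinate using Metropolis
    acceptance, Eq. (B12), with kB = 1 (reduced units)."""
    rng = random.Random(seed)
    x, E = x0, energy(x0)
    best_x, best_E = x, E
    T = T0
    while T > Tmin:
        for _ in range(steps_per_T):
            x_new = x + rng.uniform(-step, step)
            E_new = energy(x_new)
            dU = E_new - E
            # downhill moves are always accepted; uphill with Boltzmann weight
            if dU <= 0.0 or rng.random() < math.exp(-dU / T):
                x, E = x_new, E_new
                if E < best_E:
                    best_x, best_E = x, E
        T *= cool  # gradual cooling schedule
    return best_x, best_E

# anneal a double-well potential, starting from the barrier top
x_best, E_best = simulated_anneal(lambda x: (x**2 - 4.0)**2 + 0.5 * x, 0.0)
```

A production annealing run on a lattice uses exactly the same accept/reject logic, just with atomic displacement moves and the lattice potential energy of Eq. (B9) in place of the toy function.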
Gale developed the General Utility Lattice Program (GULP), a general method for simulating the structure and energetics of 3D molecular and ionic solids, gas-phase clusters, and defect structures[72]. GULP is based on the shell model described earlier.
It allows for the calculation of a range of structural, mechanical, and thermodynamic properties including relative energetics, sorbate siting, bulk modulus, Young’s modulus, dielectric constant, refractive index, piezoelectric constants, phonon frequencies, entropy, heat capacity, Helmholtz free energy, and other properties. The approach has been used to simulate a wide range of di erent oxide materials including zeolites, silicates, aluminophosphates, ceramic glasses and transition-metal oxides.

3. Monte Carlo Simulation–Equilibrium Systems
The thermodynamic properties for a system of N molecules (or N atoms) can be rigorously accounted for using statistical mechanics. Monte Carlo simulation methods provide the foundation for numerically simulating the configurational integral shown in Eq. (B13)
that arises from the statistical mechanics treatment:

Z = \int dr^N \exp\left( -U(r^N)/k_B T \right)    (B13)
where rN refers to the set of generalized coordinates for the N -particle system.
Monte Carlo integration allows the integral to be calculated by stochastically sampling a large discrete set of random configurations defined here as the number of MC sample steps (NM Csteps). The configurational integral can then be calculated using
Z = \frac{V^N}{N_{MCsteps}} \sum_{i=1}^{N_{MCsteps}} \exp\left( -U(r_i^N)/k_B T \right)    (B14)
A full range of thermodynamic properties can then be calculated via statistical mechanics. The average of some property <A> for the system is defined as the average of A over all of the different configurations generated from Monte Carlo sampling:
\langle A \rangle = \frac{\int A(r^N) \exp\left( -U(r^N)/k_B T \right) dr^N}{\int \exp\left( -U(r^N)/k_B T \right) dr^N} = \frac{1}{M} \sum_{m=1}^{M} A_m    (B15)
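As an illustration of the sample average in Eq. (B15), a Metropolis estimate of <x^2> for a single harmonic degree of freedom with U/(k_B T) = x^2/2, whose exact answer is 1 (a toy sketch with names of our own):

```python
import math, random

def metropolis_average(n_steps=200_000, step=1.0, seed=42):
    """Estimate <x^2> for U/(kB*T) = x^2/2 by averaging A = x^2
    over Metropolis-sampled configurations, as in Eq. (B15)."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        dU = 0.5 * (x_new ** 2 - x ** 2)  # energy change in units of kB*T
        if dU <= 0.0 or rng.random() < math.exp(-dU):
            x = x_new
        total += x * x  # accumulate A_m over the Markov chain
    return total / n_steps
```

The Boltzmann weight never appears explicitly in the average: the Metropolis chain visits configurations with the correct frequency, so a plain arithmetic mean of A_m suffices.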
Various methods have been used in the literature based upon the properties one wishes to simulate and thermodynamic considerations of the system being studied. A number of these methods are described below.
3.a. Canonical Ensemble (NVT) MC Simulation
In the canonical ensemble, the number of molecules (or atoms), the volume and the temperature all remain constant[4,58]. Simulations then attempt to minimize the Helmholtz free energy of the system. These simulations are used to determine the pressure of the system, its lowest energy states and optimized structures. The simulations are performed by stochastically sampling a large number of different configurations, with the system constrained to a constant number of molecules, volume and temperature. The Metropolis sampling scheme presented earlier in Section 2.b is then used to accept or reject each trial move. The simulation proceeds through millions of different trials in order to equilibrate the system. In many simulations of sorbates in microporous media or on surfaces, the description is coarse-grained, foregoing the position of each specific atom in order to speed up the simulation. Instead, the system is described using the “united atom” approach, whereby only the heavy atoms are explicitly treated. For example, the hydrogen atoms in CH3, CH2 or CH groups are collapsed into the description of a united C atom. The united atom method is also used in the subsequent methods described below.
Isothermal–isobaric simulations are performed by holding the number of molecules (atoms), pressure and temperature constant. The simulation can then be used to determine the corresponding volume in the simulation. The Gibbs free energy in this system is minimized.

3.b. Grand Canonical Ensemble (µ, V, T) MC Simulation
The grand canonical ensemble simulations model systems in which the chemical potential (µ), the volume and the temperature are held fixed while the number of particles changes. The approach is very useful for simulating phase behavior, which requires a constant chemical potential. Grand Canonical Monte Carlo simulation has been used to calculate sorption isotherms for a number of different microporous silicate systems. The simulations are used to model the equilibrium between the zeolite and sorbate phases and, as such, provide a natural way of simulating sorption isotherms[59,62].
The simulations proceed by first using a gas-phase equation of state to determine the pressure and fugacity of the gas phase. The simulation then follows a series of trial moves involving particle displacement, particle insertion and particle removal in order to establish equilibrium. The particles (molecules) in the simulation box are allowed to move, rotate or rearrange their configuration based upon the Boltzmann-weighted Metropolis sampling probability described earlier in Eq. (B12). In order to establish a constant volume, temperature and chemical potential, the number of molecules in the box can increase or decrease. In addition to the displacement moves described already, particle insertions and particle removals are also present. A new particle or molecule can be inserted into the system at a randomly chosen point based on the following probability:
P_{Accept} = \frac{fV}{k_B T (N + 1)} \exp(-\Delta U / k_B T)    (B16)
where f is the gas-phase fugacity, V is the volume, N is the number of particles (molecules) before the insertion and ∆U is the change in potential energy due to insertion.
The removal of particles from the system is governed by the following probability equation:
P_{Accept} = \frac{N k_B T}{fV} \exp(-\Delta U / k_B T)    (B17)
Simulations typically require millions of displacement, insertion and removal moves in order to equilibrate the system. The result is an adsorption equilibrium between the sorbate molecules in the gas phase and those adsorbed on the zeolite at the specific gas-phase fugacity. This represents a single point on an adsorption isotherm. The remainder of the isotherm curve can be generated by determining the amount of gas adsorbed at various other pressures[59,62].
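The acceptance probabilities of Eqs. (B16) and (B17) translate directly into code. A sketch (names and the illustrative box volume are our own); for an ideal gas with ΔU = 0, insertion and removal balance when fV/(k_B T) equals the particle number:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def p_insert(f, V, N, dU, T):
    """Eq. (B16): acceptance probability for inserting a particle."""
    return min(1.0, f * V / (KB * T * (N + 1)) * math.exp(-dU / (KB * T)))

def p_remove(f, V, N, dU, T):
    """Eq. (B17): acceptance probability for removing a particle."""
    return min(1.0, N * KB * T / (f * V) * math.exp(-dU / (KB * T)))

# ideal-gas check (dU = 0): choose a fugacity that targets <N> = 100
T, V = 300.0, 1.0e-24          # K, m^3 (illustrative values)
f = 100.0 * KB * T / V         # Pa
```

With these values an insertion attempted at N = 99 and a removal attempted at N = 100 are both accepted with probability one, which is the expected balance at <N> = 100.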
Grand Canonical simulations have been used fairly successfully in simulating single-component systems. More recent papers show that the method can also be used to simulate binary systems and mixtures[73,74].
Grand Canonical MC simulation tends to work fairly well for small-molecule systems, but it fails for larger molecules because the acceptance probabilities for insertion moves become very low owing to the interactions between the sorbate and the zeolite or other sorbate molecules. The molecule has a difficult time taking on the preferred configuration that allows it to fit into the system. Configurational biasing, as discussed next, helps to overcome this problem.
3.c. Configurationally Biased Monte Carlo Simulation (CBMC)
Configurationally biased methods can be used within the simulation to avoid the difficulties that result from the low probability of insertion[58−60,62,75]. This is accomplished by allowing the molecule to insert sequentially, atom by atom. This avoids the difficulty of having the molecule adopt one of a limited number of configurations before it can insert; instead, the adsorbate is guided atom by atom into an appropriate configuration. This, however, biases the statistical likelihood of insertion, so the acceptance rules must be changed in order to correct for the bias. Configurationally biased methods can lead to significant savings in CPU time, thus allowing simulations of systems that are typically not possible without the bias[59]. Configurational biasing is most widely adopted in Grand Canonical and Gibbs Ensemble Monte Carlo methods.
3.d. Gibbs Ensemble Monte Carlo simulation
Gibbs ensemble Monte Carlo simulation is predominantly used to simulate phase equilibrium for fluids and mixtures. Two fluid phases are simulated simultaneously allowing for particle moves between each phase[58,76,77].
3.e. Applications of Monte Carlo Simulation
Monte Carlo simulation has been used to simulate the optimized structures for zeolites, metal oxides and metals. In addition, it has been used to simulate the siting of sorbates, Henry’s law constants, heat capacities, isosteric heats, sorption isotherms and other thermodynamic properties.
4. Molecular Dynamics
The simulation of dynamic properties such as diffusivities requires the use of dynamic methods. Molecular dynamics methods integrate Newton’s laws of motion in order to follow the dynamic behavior of a system. Individual molecule or particle trajectories are obtained by solving Newton’s second law to establish the positions of all of the molecules or particles at some new time t + dt. The velocities of each particle along a specific vector can be determined by integrating the following equation with respect to time; a second integration similarly leads to the positions of each particle over time[59,63,64].
\frac{d^2 \vec{r}_i}{dt^2} = \frac{\vec{F}_i}{m_i}    (B18)

The new position is dependent on the force \vec{F}_i which acts upon particle i along the vector \vec{r}_i. The forces are determined from the force field for the system. Similarly, one can solve for the forces acting on all of the particles along all specific vectors. The forces can be calculated from the potential energy, u, with the equation
F_i = -\nabla_{r_i} u    (B19)
These equations are integrated simultaneously via finite difference methods. A Verlet algorithm is typically used to carry out the integration, whereby the new positions, velocities and accelerations of the particles are calculated from the previous positions[59,63,64]:
r(t + \delta t) = r(t) + \delta t\, \nu(t) + \frac{1}{2} \delta t^2 a(t) + \frac{1}{6} \delta t^3 b(t) + \ldots    (B20)

\nu(t + \delta t) = \nu(t) + \delta t\, a(t) + \frac{1}{2} \delta t^2 b(t) + \ldots    (B21)

b(t + \delta t) = b(t) + \delta t\, c(t) + \ldots    (B22)
The simulation of the dynamic processes of atoms and molecules requires time steps on the order of 10^-15 s in order to follow the intermolecular forces accurately. This results in substantial CPU requirements that ultimately limit the length of real time that can be simulated to the nanosecond range.
The molecular dynamics approach outlined so far is formally for NVE systems. Many problems in catalysis require constant temperature rather than constant energy. The temperature, however, is related to the time-averaged kinetic energy of the system
through the equation

\langle \kappa \rangle_{NVT} = \frac{3}{2} N k_B T    (B23)
Equation (B23) suggests that the temperature could be controlled by scaling the velocities. The scaling factor can be calculated from the following expression, which is simply derived from Eq. (B23):
\Delta T = \frac{2}{3 N k_B} \sum_{i=1}^{N} \frac{1}{2} m_i (\lambda \nu_i)^2 - \frac{2}{3 N k_B} \sum_{i=1}^{N} \frac{1}{2} m_i \nu_i^2 = (\lambda^2 - 1) T(t)    (B24)

where \lambda = \sqrt{T_{NEW}/T(t)}.
A second approach to controlling the temperature would be to include a heat bath, whereby the bath can supply or remove energy from the system[59,63,64]. These approaches, however, do not rigorously conform to canonical averages.
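The velocity-rescaling scheme of Eqs. (B23)–(B24) takes only a few lines (a sketch with names of our own; since λ = sqrt(T_target/T(t)), the rescaling moves the instantaneous temperature exactly onto the target):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def kinetic_temperature(masses, velocities):
    """Instantaneous temperature from Eq. (B23): T = 2*KE/(3*N*kB)."""
    ke = sum(0.5 * m * sum(vc * vc for vc in v)
             for m, v in zip(masses, velocities))
    return 2.0 * ke / (3.0 * len(masses) * KB)

def rescale_velocities(masses, velocities, T_target):
    """Scale every velocity by lambda = sqrt(T_target/T(t)); by Eq. (B24)
    this shifts the temperature by (lambda^2 - 1) T(t), i.e. onto T_target."""
    lam = math.sqrt(T_target / kinetic_temperature(masses, velocities))
    return [[lam * vc for vc in v] for v in velocities]
```

Applied every few MD steps, this holds the average temperature at the set point, but, as noted above, it does not generate a rigorous canonical distribution.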
Two methods have been developed that do maintain correct canonical averaging: the stochastic collision approach and the extended system approach. Both are covered in detail elsewhere[59,63,64,78,79]. We report here only some of the salient features of the extended system approach, since it is the one used primarily for constant-temperature MD simulations of heterogeneous catalytic materials.
The extended system method was developed by Nosé[78] and subsequently by Hoover[79], who considered the thermal reservoir to be an integral part of the system. The inclusion of the reservoir requires that an additional degree of freedom, defined as s, be added to the system. The potential energy for this additional degree of freedom is calculated as
u_s = (f + 1) k_B T \ln(s)    (B25)

where f is defined as the number of degrees of freedom.
The kinetic energy for this additional degree of freedom is calculated as

K.E. = \frac{Q}{2} \left( \frac{ds}{dt} \right)^2    (B26)
where Q is a fictitious mass defined for the additional degree of freedom. Q determines the rate of energy flow between the extended and the real system.
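The two reservoir terms of Eqs. (B25) and (B26) evaluate directly (a sketch; in the Nosé formulation these terms enter the extended-system Lagrangian alongside the physical kinetic and potential energies):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def reservoir_potential(f, T, s):
    """Eq. (B25): u_s = (f + 1) * kB * T * ln(s)."""
    return (f + 1) * KB * T * math.log(s)

def reservoir_kinetic(Q, s_dot):
    """Eq. (B26): (Q/2) * (ds/dt)^2."""
    return 0.5 * Q * s_dot ** 2
```

Note that the reservoir potential vanishes at s = 1, the point at which the extended system's time scale coincides with the real one, and that a larger fictitious mass Q makes the reservoir respond more sluggishly to temperature fluctuations.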