
where λ is broken up into individual intervals of length Δλ (they need not all have identical widths, but in typical practice they do). Thus, for instance, in the HCN case we might decide to divide λ into 20 segments having a width of 0.05 each. In the first ensemble, generated for λ = 0, the energy difference associated with interaction with atom HD would no longer be computed using Eq. (12.14), but instead according to

ΔE_HD = 0.05 [ a_HH/r_HB–HD^12 − b_HH/r_HB–HD^6 + q_HB q_HD/(ε r_HB–HD) ]
        + 0.95 [ a_HH/r_HA–HD^12 − b_HH/r_HA–HD^6 + q_HA q_HD/(ε r_HA–HD) ]
        − [ a_HH/r_HA–HD^12 − b_HH/r_HA–HD^6 + q_HA q_HD/(ε r_HA–HD) ]

      = 0.05 [ a_HH/r_HB–HD^12 − b_HH/r_HB–HD^6 + q_HB q_HD/(ε r_HB–HD) ]
        − 0.05 [ a_HH/r_HA–HD^12 − b_HH/r_HA–HD^6 + q_HA q_HD/(ε r_HA–HD) ]          (12.17)

The effect is to remove 95 percent of the unfavorable consequences of materializing the HB atom, making the ensemble hopefully more relevant for the chimeric molecule than it would be for ‘full’ HNC. Once sufficient statistics have been collected for this window, a fresh ensemble is generated from a simulation of the chimera with λ = 0.05, and the energy difference between λ = 0.10 and λ = 0.05 is evaluated. This process is repeated, interval by interval, until λ reaches 1, at which point all of the Helmholtz free-energy changes for the individual intervals are summed together to give the total for HCN to HNC.
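To make the bookkeeping concrete, here is a minimal Python sketch (not code from the text) of the window-by-window procedure just described: each window's Helmholtz free-energy increment is taken from the exponential average of Eq. (12.16), and the increments are summed over all λ intervals. The function names and the synthetic ΔE samples are assumptions for illustration only; in a real calculation the samples would come from the MC or MD ensemble generated at each λ.

```python
import math
import random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def window_free_energy(delta_E_samples, T=298.0):
    """One FEP window: dA = -kT ln < exp(-dE/kT) >, in the spirit of Eq. (12.16)."""
    beta = 1.0 / (K_B * T)
    avg = sum(math.exp(-beta * dE) for dE in delta_E_samples) / len(delta_E_samples)
    return -K_B * T * math.log(avg)

def total_free_energy(per_window_samples, T=298.0):
    """Sum the increments over all lambda windows (e.g., 20 windows of width 0.05)."""
    return sum(window_free_energy(samples, T) for samples in per_window_samples)

# Illustration only: synthetic dE samples standing in for real simulation output.
random.seed(1)
fake_windows = [[random.gauss(0.3, 0.5) for _ in range(1000)] for _ in range(20)]
print("Estimated A(HNC) - A(HCN): %.2f kcal/mol" % total_free_energy(fake_windows))
```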

By creating new ensembles with each increase in λ, potentially offending water molecules in the region of the nitrogen atom like the one mentioned above are ‘eased’ out of the way, since in each new ensemble the presence of HB becomes more manifest. The cost, however, is that now 20 simulations need to be undertaken instead of one (assuming an interval width of 0.05 as in the example).

When one is generating an ensemble for a fractional value of λ, it is equally easy to evaluate the energy change for λ − Δλ as it is for λ + Δλ. The former is equivalent to imagining the reaction not as HCN → HNC but rather as HNC → HCN. Evaluation in this fashion thus simultaneously determines the forward and reverse free-energy changes from the identical ensemble. In principle the free-energy change computed for the interval [λ, λ + Δλ] should be exactly the opposite of that computed for the interval [λ + Δλ, λ]. In practice, however, this is rarely true, and the variations provide some indication of the potential error in the FEP process. For instance, in Figure 12.2 the reverse mutation predicts a negative free-energy change slightly larger in magnitude than the positive free-energy change for the forward mutation. This difference is sometimes reported as the error in the simulation. Because the free-energy change should be linear in λ if Eq. (12.15) is used (dotted line), the hysteresis of the FEP diagram is sometimes used as a more conservative estimate of the error.
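The forward/reverse evaluation from a single ensemble, and the hysteresis it exposes, can be sketched in the same way. The layout below is an assumption: each window is taken to supply both forward (E at λ + Δλ minus E at λ) and reverse (E at λ − Δλ minus E at λ) energy differences evaluated on the λ ensemble, and the mismatch between the two directions serves as a rough error bar.

```python
import math

K_B = 0.0019872041  # kcal/(mol K)

def fep_increment(dE_samples, T=298.0):
    """dA = -kT ln< exp(-dE/kT) >, with dE evaluated on the generating ensemble."""
    beta = 1.0 / (K_B * T)
    avg = sum(math.exp(-beta * dE) for dE in dE_samples) / len(dE_samples)
    return -K_B * T * math.log(avg)

def forward_and_reverse(windows, T=298.0):
    """windows: list of (forward_dE_samples, reverse_dE_samples) pairs, one per lambda.

    Forward uses E(lambda + dlam) - E(lambda); reverse uses E(lambda - dlam) - E(lambda),
    both computed from the same ensemble generated at lambda."""
    dA_forward = sum(fep_increment(f, T) for f, _ in windows)
    dA_reverse = sum(fep_increment(r, T) for _, r in windows)
    # Ideally dA_reverse = -dA_forward; the mismatch (hysteresis) is a rough error bar.
    return dA_forward, dA_reverse, abs(dA_forward + dA_reverse)
```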

An alternative procedure is known as ‘double-wide sampling’. In this case, the ensemble is generated by MC or MD methods for the Hamiltonian corresponding to a given value of λ, but the evaluation of the free-energy change is for the interval [λ − 0.5Δλ, λ + 0.5Δλ]. Thus, the total interval width is still Δλ, but the evaluation is over half-step changes left and right in the Hamiltonian parameters. In principle, this may lead to improved sampling because neither endpoint is evaluated using a Hamiltonian that is more than 0.5Δλ from the Hamiltonian used to generate the ensemble. Further discussion of technical points and error analysis is deferred to Section 12.2.6.

Figure 12.2 A typical FEP diagram showing the free-energy change ∆G (vertical axis) as a function of λ from 0 to 1 (horizontal axis) in the forward (above) and reverse (below) directions for a λ-coupled mutation

12.2.3 Slow Growth and Thermodynamic Integration

In Eq. (12.16), one may imagine taking λ intervals Δλ so small that ΔE on any given interval is arbitrarily close to zero. In that case, we may represent the exponential as a truncated power series, deriving

A_B − A_A = lim(Δλ→0) { −kBT Σ(λ=0→1) ln ⟨ 1 − [E(λ+Δλ) − E(λ)]/kBT ⟩_λ }          (12.18)

This expression may be further simplified by noting that ln(1 + x) is well approximated by x for sufficiently small values of x, so that we may write

A_B − A_A = lim(Δλ→0) { −kBT Σ(λ=0→1) ⟨ −[E(λ+Δλ) − E(λ)]/kBT ⟩_λ }
          = lim(Δλ→0) Σ(λ=0→1) ⟨ E(λ+Δλ) − E(λ) ⟩_λ
          = lim(Δλ→0) Σ(λ=0→1) [ E(λ+Δλ) − E(λ) ]          (12.19)


The removal of the ensemble average over the λ ensemble in the final line on the r.h.s. reflects the protocol of this technique, the so-called slow-growth method. It is assumed that if the Hamiltonian is infinitesimally perturbed at every step in the simulation, then the system will constantly be at equilibrium (following some initial period of equilibration), so separate ensemble averages need not be acquired.
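A minimal sketch of the slow-growth protocol, under the assumption of user-supplied `propagate` and `energy` routines (both hypothetical), might look as follows: λ is incremented at every simulation step and the instantaneous energy differences are simply summed, with no per-window ensemble averages.

```python
def slow_growth(total_steps, propagate, energy, start_config=None):
    """Slow-growth accumulator (a sketch only, not a production protocol).

    propagate(config, lam) -> new config: one MC/MD step under the lambda Hamiltonian.
    energy(config, lam)    -> potential energy at coupling parameter lam.
    The estimate is the running sum of E(lam + dlam) - E(lam), one term per step,
    with no separate ensemble averages (cf. the final line of Eq. (12.19))."""
    dlam = 1.0 / total_steps
    config, lam, dA = start_config, 0.0, 0.0
    for _ in range(total_steps):
        config = propagate(config, lam)
        dA += energy(config, lam + dlam) - energy(config, lam)
        lam += dlam
    return dA
```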

In practice, then, the slow-growth technique is rather different from FEP when it comes to evaluating ΔE. Since each change in λ is also a step in the simulation, all of the intrasolvent energy terms change in addition to the solvent–solute interaction terms. With respect to the latter terms, however, the evaluation is similar to FEP in that chimeric molecules are involved.

A third simulation protocol for determining Helmholtz free-energy differences can be illustrated by further manipulation of Eq. (12.19). Thus we may write

 

 

 

A_B − A_A = lim(Δλ→0) Σ(λ=0→1) ⟨ E(λ+Δλ) − E(λ) ⟩_λ
          = lim(Δλ→0) Σ(λ=0→1) ⟨ [E(λ+Δλ) − E(λ)]/Δλ ⟩_λ Δλ
          = ∫(0→1) ⟨ ∂E/∂λ ⟩_λ dλ
          ≈ Σ(λ=0→1) ⟨ ∂E/∂λ ⟩_λ Δλ          (12.20)

where we first recognize the calculus relationship between the sum appearing on the r.h.s. in the second line and the definite integral in the third line (and simultaneously the definition of the partial derivative), and we then approximate the definite integral as a sum over small intervals. While the transformation from line 3 to line 4 may appear to simply reverse the transformation from line 2 to line 3, this is not the case, because the partial derivative remains in its analytic form; this is possible because most simulations evaluate the energy using E(λ) functions that are trivially differentiated. Moreover, Δλ in the final line is no longer infinitesimally small, i.e., this is a standard estimation of an integral by division of the integration range into discrete intervals with the function approximated over each interval by a single value, in this case the value at the start of the interval. This process defines the thermodynamic integration (TI) method. [TI can be derived in a much more rigorous and general way, and indeed, FEP may be regarded as a special case of TI; interested readers are referred to the bibliography at the end of the chapter.]
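In code, the last line of Eq. (12.20) is just a weighted sum of per-window ensemble averages of ∂E/∂λ. The sketch below assumes those averages have already been accumulated (one per window, evaluated at the start of each interval) and applies the rectangle rule described in the text; it is illustrative only, not a prescription.

```python
def thermodynamic_integration(dE_dlam_averages, dlam):
    """Last line of Eq. (12.20): A_B - A_A ~ sum over windows of <dE/dlam> * dlam.

    dE_dlam_averages: one ensemble-averaged derivative per window, each taken at
    the start of its interval (simple rectangle rule, as described in the text)."""
    return sum(avg * dlam for avg in dE_dlam_averages)

# With the linear coupling of Eq. (12.15), dE/dlam is just E_B - E_A, so each entry
# would be the ensemble average of E_B - E_A collected in the corresponding window.
```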

It is evident that TI and FEP are similar in that they involve multiple simulations over different windows in λ, with accuracy expected to increase when more and smaller windows are employed. However, there are key differences as well. In TI, the ensemble average for one value of λ is not used to evaluate any energies involving a different value of λ; only the ensemble average of the energy derivative is accumulated. Moreover, different forms of E(λ) may be conveniently evaluated, corresponding to different mutation paths from A to B. For example, one may choose the generalization of Eq. (12.15)

E(λ) = λ^n E_B + (1 − λ)^n E_A

(12.21)

where n is an arbitrary exponent that may be freely chosen. [Note that for both FEP and TI, one may also couple λ more intimately into individual force-field terms; the only requirement is that the correct limits be maintained, i.e., E(λ = 0) = EA and E(λ = 1) = EB.]
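For the coupling of Eq. (12.21), the analytic derivative needed by TI follows directly; a short sketch (with hypothetical argument names) is given below, together with a check that the required limits E(λ = 0) = EA and E(λ = 1) = EB hold for any n.

```python
def coupled_energy(lam, E_A, E_B, n=1):
    """E(lambda) from Eq. (12.21): lambda^n * E_B + (1 - lambda)^n * E_A."""
    return lam**n * E_B + (1.0 - lam)**n * E_A

def coupled_energy_derivative(lam, E_A, E_B, n=1):
    """Analytic dE/dlambda used by TI: n lam^(n-1) E_B - n (1-lam)^(n-1) E_A."""
    return n * lam**(n - 1) * E_B - n * (1.0 - lam)**(n - 1) * E_A

# Required limits hold for any n: E(0) = E_A and E(1) = E_B.
assert coupled_energy(0.0, 2.0, 5.0, n=3) == 2.0
assert coupled_energy(1.0, 2.0, 5.0, n=3) == 5.0
```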

12.2.4 Free-energy Cycles

The discussion thus far has ignored certain rather tricky technical issues as well as certain very real practical difficulties that can arise in various types of simulations. Often, these problems can be avoided by the invocation of a free-energy cycle. For instance, Jorgensen and Ravimohan (1985) invoked such a cycle to study the difference in the free energies of aqueous solvation for methanol and ethane (Figure 12.3). The calculation of the absolute free energies of solvation for each of these two molecules would be subject to large errors, because the necessary perturbation would involve growing the molecules from ‘nothing’ both in the gas phase and in a box of water. While the former is a trivial exercise, the introduction of a solute into an equilibrated water box is a very difficult affair because no matter how small the first step is taken to be, there is a strong possibility of introducing the solute atoms into regions that result in unphysically high energies, thereby generating a poor sample. The difference between two solvation energies each with high associated errors would then have a still higher error, and might not be particularly useful as a result (for recent advances in addressing the challenge of computing absolute solvation free energies, see, for example, Åberg et al. 2004). Shirts et al. (2003) have demonstrated that the opposite process, i.e., disappearing a solute molecule from a water box, can be more useful for computing absolute solvation free energies, but a substantial commitment of computational resources is still required.

Figure 12.3 Free-energy cycle connecting CH3OH(g) → CH3CH3(g) across the top (∆G°g) and CH3OH(aq) → CH3CH3(aq) across the bottom (∆G°aq), with vertical legs ∆G°S(CH3OH) and ∆G°S(CH3CH3). The vertical sides of this free-energy cycle correspond to free energies of aqueous solvation, while the horizontal sides correspond to chemical mutations that are not physically realistic but are accessible by FEP. The difference between the two vertical quantities must be equal to the difference between the two horizontal quantities. While the former difference is easier to measure, the latter is easier to compute


However, if one is indeed interested primarily in the difference in the solvation free energies, and not the absolute values, one can carry out the necessary two simulations in a completely different fashion. Instead of growing each solute molecule from nothing, one is transformed into the other using a chimeric approach (Jorgensen and Ravimohan used FEP). By the state-function nature of the free energy, the difference in the transmutation free energies in the gas phase and in aqueous solution must be equal to the difference in the absolute solvation free energies. A single transmutation in water is far simpler to carry out with good statistical accuracy than two separate ‘creations’ of solutes, and again the gas-phase mutation is trivial. Representing the solute molecules using the OPLS force field, Jorgensen and Ravimohan determined an aqueous solvation free-energy difference of 6.75 ± 0.2 kcal mol−1, which is in good agreement with the experimental value of 6.93 kcal mol−1.

Note that, in principle, one could use FEP to determine a ‘web’ of solvation free-energy differences between many different substrates, and then carry out a single calculation of an absolute free energy of solvation (i.e., growing one solute molecule from nothing) that would serve as an anchor to convert all of the relative free energies of solvation into absolute free energies of solvation (for the example of a set of substituted benzenes in water, and a comparison to predictions from the SM2 continuum model, see Jorgensen and Nguyen 1993).

Because they used a free-energy cycle, Jorgensen and Ravimohan assumed that changes in the kinetic-energy component of the mutation would cancel between the gas phase and solution, so they did not compute them; i.e., they reduced the size of the phase-space problem for Monte Carlo sampling by a factor of 2 by removing all momentum degrees of freedom. This simplifying assumption remains standard in modern calculations. It holds for constant-temperature MD simulations as well, since scaling the velocities to maintain temperature necessarily distorts the momentum sampling; modern simulations typically evaluate only the potential-energy differences between mutated structures.

Free-energy cycles can be used to simplify simulations covering a wide variety of processes. For example, if we were interested not in the difference in free energies of solvation of methanol and ethane, but the difference in their partitioning between water and octanol, we can simply rewrite Figure 12.3 so the upper leg is in octanol instead of water, and carry out the same mutation of methanol to ethane described above, now once in octanol and once in water, to determine the free energy difference. The alternative procedures for analyzing the vertical legs would be very unpleasant indeed: we would either have to grow each solute from nothing in two different solvents, or we would have to mutate one solvent into another, which would be even worse.

A free-energy cycle finding particularly widespread use is one for evaluating differences in interactions between enzymes (or other molecular hosts) and alternative molecules in their active sites. By mutating one substrate into another, both in the presence of the enzyme and isolated in solution, differences in free energies of binding may be determined (Figure 12.4). An example is provided in Section 12.6 as a case study.
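Closing the cycle of Figure 12.4 is simple arithmetic: the difference in binding free energies equals the difference of the two mutation legs. The following sketch just encodes that subtraction; the numerical values in the usage line are hypothetical and serve only to show the sign convention.

```python
def relative_binding_free_energy(ddG_mut_complex, ddG_mut_solution):
    """Close the cycle of Figure 12.4.

    ddG_mut_complex : free energy for mutating the bound substrate, E.S -> E.S'
    ddG_mut_solution: free energy for mutating the free substrate, S -> S', in solution
    Returns dG_bind(S') - dG_bind(S)."""
    return ddG_mut_complex - ddG_mut_solution

# Hypothetical numbers, purely to show the sign convention (not from the text):
print(relative_binding_free_energy(-3.2, -1.1))  # -2.1: S' binds more tightly than S
```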


 

Figure 12.4 Differential binding free-energy cycle connecting E(s) + S(s) → E•S(s) across the top (∆G°aq(1)) and E(s) + S′(s) → E•S′(s) across the bottom (∆G°aq(2)), with vertical legs ∆∆G°mut(S) for the free substrate, ∆∆G°mut(E•S) for the bound substrate, and zero for the free enzyme. The difference in binding free energies for two different substrates, S and S′, is equal to the difference in mutation free energies for changing S into S′ in solution and E•S into E•S′ in solution. The leftmost vertical free-energy change is zero, since the free enzyme is a constant independent of substrate

12.2.5 Potentials of Mean Force

When free energy is expressed as a function of coordinate, it is referred to as a potential of mean force (PMF). The PMF W can be determined as

W (q) = −kBT ln π(q)

(12.22)

where q is the coordinate, and π is the probability of the coordinate taking on a particular value, i.e.,

π(q) = Q^−1 ∫ δ[q′(q) − q] exp[−E(q,p)/kBT] dq dp          (12.23)

where Q is the (normalizing) full partition function, δ is the Dirac delta function, and q′(q) is the value of the PMF coordinate for any arbitrary point in phase space having positional coordinates q.

In practice, one may evaluate these probabilities following a histogram approach like those outlined in Chapter 3. Over the course of a MC or MD simulation, the value of q is collected and binned, and the probability of different ranges of values can be determined upon completion of the simulation based on the number of points in a bin compared to the total number of points. For example, we might be interested in the PMF for rotation about the C−O bond in fluoromethanol (see Figure 2.3). Over the course of a simulation, the torsional angle would be saved at every step, and with good sampling a probability histogram would permit conversion to a PMF accurately reflecting the true potential. In the case of fluoromethanol, the difference in energy between the lowest and highest points on the potential energy curve is about 3 kcal mol−1. At 298 K, we would thus expect to sample points in the highest energy region about 100 times less frequently than points in the lowest energy region. Of course, if the width of a bin is, say, one degree, there are many other possibilities for bins to fill, and ultimately roughly one point in every 10 000 or so would be statistically expected to fall into the highest energy bin. To obtain reliable statistics, we might want this least populated bin to contain at least 100 points, so we would require a sample of some 1 000 000 snapshots. An ensemble of this size is accessible with current computational technology, but represents a reasonably significant investment of resources.
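A sketch of the histogram-to-PMF conversion just described, assuming the sampled torsion angles are already in hand (the function name, bin width, and temperature default are illustrative choices, not prescriptions from the text):

```python
import math
from collections import Counter

K_B = 0.0019872041  # kcal/(mol K)

def pmf_from_samples(torsion_samples_deg, bin_width=1.0, T=298.0):
    """Histogram a sampled coordinate (here, a torsion angle in degrees) and convert
    bin probabilities into a PMF via Eq. (12.22), W = -kT ln(pi); empty bins are
    simply absent from the result.  The PMF is shifted so its minimum is zero."""
    counts = Counter(int(angle // bin_width) for angle in torsion_samples_deg)
    total = sum(counts.values())
    pmf = {b * bin_width: -K_B * T * math.log(n / total) for b, n in counts.items()}
    w_min = min(pmf.values())
    return {q: w - w_min for q, w in pmf.items()}
```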

Now, consider if the highest energy point on the curve were to be 6 kcal mol−1 above the lowest at 298 K. Because the probability involves the exponential of the energy difference, doubling the difference squares the sampling ratio (i.e., the highest energy region is now sampled 10 000 times less frequently than the lowest energy region). Obtaining a statistically meaningful sample of low probability regions now becomes a significantly more difficult prospect, and statistically reliable PMFs cannot be obtained in this fashion.

The problem of low probability regions is even more severe when it comes to chemical reaction coordinates, where free energies of activation for chemically viable processes may range well above 20 kcal mol−1. The probability of obtaining a snapshot in the region of a transition state structure having so high an energy (assuming for the moment that we have some Hamiltonian capable of describing bond-making/bond-breaking) is so remote that no brute force simulation can legitimately expect to capture even one relevant point, much less a statistically meaningful sample. This is the problem of sampling ‘rare events’.

One approach to overcoming this problem is to apply a so-called ‘umbrella potential’ or biasing potential. This potential, a function of the coordinate of interest q, is added to the force-field energy with the aim of forcing q to be sampled heavily within a certain range of values that would not otherwise be statistically accessible. An ideal umbrella potential is one that is the exact negative of the PMF, since then the probability of sampling any value of q should be uniform. However, one rarely knows the PMF ahead of time (otherwise why would one be trying to calculate it?), so instead one typically applies rather simple biasing potentials (e.g., a quadratic potential) to force q to be sampled over some interval including a particular value q0.

Consider, for instance, the SN2 reaction of Br− with CH3Br in aqueous solution, which has an activation free energy on the order of 20 kcal mol−1. If we define our reaction coordinate as

q = rC–BrA − rC–BrB

(12.24)

where A and B are the incoming and outgoing bromide ions, respectively, we see that the reactants correspond to large positive values of q, products to large negative values of q, and from our knowledge of bimolecular nucleophilic substitution reactions, we know that the transition state region will have values of q very near zero. Let us assume that we have a force field that provides an accurate potential energy curve in the gas phase for this SN2 process – in spite of this, in a normal MC or MD simulation in a box of water we would be very unlikely to sample in regions anywhere near the TS because of the very low probabilities associated with such high-energy structures. However, if we apply biasing potentials of the form

U(q) = (1/2) k (q − q0)^2          (12.25)

where q0 is the particular value near which we want to sample, and we select the force constant k to be suitably large, we can ensure that the simulation will sample heavily within some distance of q0, since structures having values of q significantly different from q0 will be heavily penalized by addition of U.
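As a sketch of how such a bias might be wired into a Metropolis Monte Carlo loop, consider the following; `energy(q)` is a hypothetical stand-in for the full potential along the coordinate of interest, and the step size and force constant would need tuning in any real application.

```python
import math
import random

K_B = 0.0019872041  # kcal/(mol K)

def umbrella_mc(energy, q0, k_bias, steps=50000, q_start=0.0, max_move=0.05, T=298.0):
    """Metropolis sampling of a single coordinate q with the harmonic umbrella term
    of Eq. (12.25) added to the potential, so that values near q0 are sampled heavily."""
    beta = 1.0 / (K_B * T)
    biased = lambda q: energy(q) + 0.5 * k_bias * (q - q0) ** 2
    q, e = q_start, biased(q_start)
    samples = []
    for _ in range(steps):
        q_trial = q + random.uniform(-max_move, max_move)
        e_trial = biased(q_trial)
        if e_trial <= e or random.random() < math.exp(-beta * (e_trial - e)):
            q, e = q_trial, e_trial
        samples.append(q)
    return samples
```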

When this procedure is followed, a different probability function π′(q) will be obtained over the sampled region. The correct PMF (i.e., for the unbiased potential) is related to the new probability function according to

W(q) = −kBT ln π′(q) − U(q) + kBT ln ⟨ exp[U(q)/kBT] ⟩

(12.26)

where the ensemble average is accumulated with the biasing function added to the system Hamiltonian. This function is quite simple to evaluate for a typical selection of U. However, it is often the case for an unknown PMF that a single choice of functional form for U will not lead to a statistically useful sample over the entire range of interest for q. Instead, one carries out several simulations, with different choices for U (for instance, by varying choice of q0 in Eq. (12.25)), and then patches together the relevant regions of the PMF to generate a single curve. This process is illustrated in Figure 12.5. Obtaining a good overlap of the individual pieces can be difficult in some instances, which contributes to error in the method when overlap is required. Indeed, Figure 12.5 is somewhat misleading, since each individual PMF fragment actually rises to infinitely positive free-energy values at either end (that is, the probability of finding the system far to the right or left becomes so small that the corresponding free energy is very large). As these PMF walls have no physical meaning, but are artifacts of the umbrella function, they have been left out of Figure 12.5 for clarity, but in practice they can add to the difficulty associated with reliably overlapping different segments of the full reaction coordinate. The weighted histogram analysis method (WHAM; Kumar et al. 1992) is one of the more popular approaches for accomplishing this overlap; the details of WHAM, however, are beyond the scope of this text.

Figure 12.5 A reaction coordinate q constructed piecewise from reactants R to products P as a series of PMFs determined using different umbrella functions. The individual PMFs determined using Eq. (12.26), shown below the dashed line and taking each left endpoint as the relative zero, are held within their respective regions of the reaction coordinate by the umbrella function. Their overlap on a common energy scale generates the complete PMF shown above the dashed line
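The per-window unbiasing of Eq. (12.26) and a naive pasting of the resulting segments might be sketched as below. The constant term of Eq. (12.26) is not computed explicitly here; each segment is instead shifted to agree with its neighbor over their overlapping bins, which is only a crude stand-in for what WHAM does properly.

```python
import math
from collections import Counter

K_B = 0.0019872041  # kcal/(mol K)

def unbiased_segment(samples, bias, bin_width=0.05, T=298.0):
    """One umbrella window via Eq. (12.26): W = -kT ln(pi') - U, dropping the
    q-independent constant, which is recovered when segments are shifted to overlap."""
    counts = Counter(round(q / bin_width) for q in samples)
    total = sum(counts.values())
    return {b * bin_width: -K_B * T * math.log(n / total) - bias(b * bin_width)
            for b, n in counts.items()}

def stitch(segments):
    """Paste successive window PMFs by shifting each one to agree, on average, with
    the previous segment over their overlapping bins (a crude stand-in for WHAM)."""
    full = dict(segments[0])
    for seg in segments[1:]:
        overlap = [q for q in seg if q in full]
        shift = sum(full[q] - seg[q] for q in overlap) / len(overlap) if overlap else 0.0
        for q, w in seg.items():
            full.setdefault(q, w + shift)
    return full
```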

An alternative to extracting the proper PMF from one generated using a biasing potential is to employ the so-called constraint-force method. In this model, one or more degrees of freedom are held to a series of N fixed values (for simplicity we will continue to work with only one dimension q, ranging then from q1 to qN). For a given fixed value qi, with this value differing from qi+1 by a small amount Δqi, the ensemble-average value of ∂W/∂q is evaluated. Once all average derivative values are in hand, it is a simple matter to reconstruct W by numerical integration, i.e.,

W(q_j) = W(q_1) + ∫(q_1→q_j) ⟨ ∂W/∂q ⟩ dq
       ≈ W(q_1) + Σ(i=1→j−1) ⟨ ∂W(q_i)/∂q ⟩ Δq_i          (12.27)

If readers fail to find inspection of Eqs. (12.22) and (12.23) particularly enlightening with respect to how precisely to evaluate ∂W/∂q, they may consider themselves to be in good company. After substantial debate in the literature, the proper and rather complicated approaches to computing this quantity for the one- (den Otter and Briels 1998) and multidimensional (den Otter and Briels 2000) cases have been derived. In addition, Darve and Pohorille (2001) have described a generalization of this approach in which simulations may be run without the imposition of any constraints. In that case, a biasing potential still needs to be applied globally so that the system samples q fully, but the numerical integration of Eq. (12.27) avoids the problem of overlapping partial PMFs illustrated in Figure 12.5.
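Once the average derivatives are available, the numerical reconstruction in Eq. (12.27) is a running sum. A short sketch, with made-up input values purely to show the mechanics:

```python
def pmf_from_mean_forces(q_values, mean_dW_dq, W_start=0.0):
    """Discrete form of Eq. (12.27): W(q_j) ~ W(q_1) + sum_i <dW/dq>(q_i) * dq_i.

    mean_dW_dq holds the averaged derivative at each constrained point except the
    last; the running (rectangle-rule) sum rebuilds W on the same grid."""
    W = [W_start]
    for i in range(len(q_values) - 1):
        W.append(W[-1] + mean_dW_dq[i] * (q_values[i + 1] - q_values[i]))
    return W

# Made-up values for five constrained points along q, purely to show the mechanics:
print(pmf_from_mean_forces([0.0, 0.5, 1.0, 1.5, 2.0], [1.0, 2.0, 0.5, -1.0]))
```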

Rosso et al. (2002) have proposed an alternative sampling method in which the likelihood of rare event observation is enhanced by separating the reactive coordinate(s) from the remaining degrees of freedom and propagating the former components of the trajectory at high temperature with a fictitiously high mass. This combination permits the other degrees of freedom to respond adiabatically to the reactive coordinate(s), which are themselves able to generate a more complete unbiased free energy profile by virtue of the high temperature.

Irrespective of the protocol used for enhanced sampling, a key difficulty arises when the reaction mechanism is not well understood. In that case, even the definition of the reaction coordinate q can be problematic. This is a common problem in the simulation of enzyme active sites, where bond-forming or bond-breaking reactions may or may not occur with simultaneous proton transfer(s) between enzyme and substrate functional groups. In the event of multiple bond-making/bond-breaking events occurring simultaneously, it becomes quite difficult to construct suitable one-dimensional slices and biasing potentials through phase space that permit generation of useful PMFs.

In sum, the generation of accurate PMFs from probability distributions for processes with free energies of activation in excess of a few kilocalories per mole continues to be a significant challenge for modern simulation methods. Some alternative approaches, using both continuum and explicit solvation models, are discussed in Section 15.4.


12.2.6 Technical Issues and Error Analysis

Free-energy simulations are extremely demanding in a technical sense, and it is well beyond the scope of this book to fully prepare readers to apply the technology without further instruction. Nevertheless, there are a few technical issues that arise (on top of those already discussed for simulations in general in Section 3.6) that merit attention insofar as they affect many published free-energy simulation results. Much more authoritative treatments are available in the bibliography and suggested reading.

When perturbations from one molecule to another are carried out, there are two distinct approaches that may be taken. The ‘single topology’ approach involves a single solute species that is smoothly transformed from the first molecule to the second as a function of λ. In the HCN/HNC example above, the single topology approach would involve not only the steady disappearance of the carbon-bound hydrogen and the appearance of the nitrogen-bound hydrogen, but also any change in the C–N equilibrium length and force constant as it transforms from a nitrile to an isonitrile bond type. In addition, if the atomic partial charges on C and N were to be different for nitriles and isonitriles, these too would change as a function of λ. The solute molecule at intermediate values of λ is thus truly chimeric.

The ‘dual topology’ approach, on the other hand, involves having the distinct initial and final solutes simultaneously present, but no force-field interactions between the two are ever calculated. The interactions of both are calculated with the surrounding medium in the normal way, but at intermediate values of λ the total energy of the system will be derived as a λ-dependent function of the two. The dual topology approach is simpler in implementation but problems can arise if the two topologies drift away from one another during the course of the simulation (for instance, if one solute were to leave the active site of an enzyme while the other stayed in it, obviously the difference in binding free energies would not be calculable). Both single and dual topology calculations continue to see about equal use.

As already mentioned above, the sudden appearance of atoms at positions in space occupied by solvent molecules as the result of a mutation can lead to severe sampling problems. As a rule, changes in van der Waals interactions must be introduced much more slowly than changes in charge in order to maintain good equilibrium in ensemble averages. Since a free-energy change is independent of the mutation path (assuming perfect sampling), paths that carry out changes in charges more quickly than changes in van der Waals interactions are not uncommon.

The discussion in this chapter has focused almost exclusively on computing changes in the Helmholtz free energy A. However, most experimental measurements are carried out at constant pressure, not constant volume, so the majority of thermochemical data is in the form of Gibbs free energies G. As long as the total number of particles in a free-energy simulation remains constant, almost all simulations assume that PΔV is zero, in which case the Gibbs and Helmholtz free-energy changes are identical (this is readily derived from Eqs. (12.2)–(12.7)). When this is not the case, the additional contributions to G must be explicitly accounted for.

Of the three methods discussed above, FEP, TI, and slow growth, the first two see far more application than the third. The slow-growth condition, that the system is constantly at, or at least very, very near equilibrium, is quite hard to maintain over the course of a
