
We can take this formulation further and model the behaviour of the two-point correlation of the matter fluctuations. Let us divide the possible range of masses at time t0 into three intervals: (a) scales corresponding to masses still in the linear regime, i.e. those with tM > t0 or, equivalently, M > M(t0) = M0; (b) scales which have reached their radius of maximum expansion but have not yet reached virial equilibrium – for these scales t0 > tM > t0/3; and (c) scales which have reached virial equilibrium, i.e. those with tM < t0/3.
The relationship between M and r for scales in the first interval is just
M = (4π/3)[ρ0m + δρm(r)]r³ ≃ (4π/3)ρ0m r³,    (14.4.9)
while for the second and the third we have
M = (4π/3)ρcM r³,    (14.4.10)
where ρcM is the density of the condensation of mass M which coincides with ρM given in (14.4.5) for those condensations already virialised. Because ρcM ≫ ρ for scales of interest in this context we have, from Section 13.7,
ξ(r) ≃ ρcM(r)/ρ − 1 ≃ ρcM(r)/ρ.    (14.4.11)
For the scales which are still in the linear regime we have
ξ(r) ≃ σM² ∝ r^{−(nrec+3)}.    (14.4.12)
From Equations (14.4.5) and (14.4.11) one can obtain, for the third interval,
ξ(r) ≃ (72χ − 1)(r/rvir)^{−γvir},    (14.4.13)
where rvir is the scale which has just reached virial equilibrium and which corresponds to a mass scale Mvir.
In the second interval we cannot write an exact expression for ξ(r) for any value of r. For the scale rM0, which has just reached maximum expansion, we have ξ(rM0) ≃ χ − 1. For scales rvir ≲ r ≲ rM0 one can introduce a covariance function which is approximated by a power law, by analogy with Equations (14.4.12) and (14.4.13), so that it matches the exact values at rvir and rM0:
ξ(r) ≃ (72χ − 1)(r/rvir)^{−γ̄} ≃ (χ − 1)(r/rM0)^{−γ̄},    (14.4.14)
with exponent γ̄ given by
γ̄ = ln[(72χ − 1)/(χ − 1)] / ln(rM0/rvir).    (14.4.15)
Let us recall that, from (14.4.3), we have
M0 = (4π/3) rM0³ χρ0m = MJ (t0/tJ)^{2/3αrec},    (14.4.16)
Mvir = (4π/3) rvir³ 72χρ0m = MJ (t0/3tJ)^{2/3αrec},    (14.4.17)
so that
γ̄ = 3 ln[(72χ − 1)/(χ − 1)] / [ln 72 + (ln 81)/(3 + nrec)] ≃ 3.18/[1 + 1.03/(3 + nrec)].    (14.4.18)
One can show that for Ω ≠ 1 one has χ = π²/[4Ω(H0t0)²] instead of χ = (3π/4)² ≃ 5.6; for Ω = 0.1, for example, this yields χ ≃ 30.6 and Equation (14.4.18) gives γ̄ ≃ 3.03/[1 + 0.349/(3 + nrec)].
In this way, in the case Ω = 1, one obtains practically the complete behaviour of ξ(r) for a given nrec; the only part not covered is that in which χ − 1 ≃ 5 ≳ ξ(r) ≳ 1, where the correlation function passes gradually between the behaviour described by Equations (14.4.12) and (14.4.14). In the case Ω = 0.1 the missing range is larger, χ − 1 ≃ 30 ≳ ξ(r) ≳ 1. In any case these results can probably only be interpreted meaningfully in the regime where ξ ≫ 1. It is interesting to note that, with a spectral index at recombination given by nrec ≃ 0, we have γvir ≃ 1.8.
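As a numerical illustration of these results, the short Python sketch below evaluates γ̄ from Equation (14.4.18). Since Equation (14.4.6) is not reproduced in this section, the function gamma_vir assumes the standard self-similar value γvir = 3(3 + nrec)/(5 + nrec); this assumption is consistent with the value γvir ≃ 1.8 for nrec ≃ 0 quoted above.

    import numpy as np

    def gamma_bar(n_rec, chi=(3.0 * np.pi / 4.0) ** 2):
        """Intermediate slope of xi(r), Equation (14.4.18); chi defaults to the
        Omega = 1 turnaround value (3*pi/4)**2 ~ 5.6 quoted in the text."""
        return (3.0 * np.log((72.0 * chi - 1.0) / (chi - 1.0))
                / (np.log(72.0) + np.log(81.0) / (3.0 + n_rec)))

    def gamma_vir(n_rec):
        """Slope in the virialised regime; assumes the standard self-similar
        value gamma_vir = 3(3 + n)/(5 + n) (not reproduced in this excerpt)."""
        return 3.0 * (3.0 + n_rec) / (5.0 + n_rec)

    for n in (-1.0, 0.0, 1.0):
        print(n, round(gamma_bar(n), 2), round(gamma_vir(n), 2))
    # n_rec ~ 0 gives gamma_vir = 1.8, the value quoted above; gamma_bar can be
    # compared with the approximate form 3.18/[1 + 1.03/(3 + n_rec)] in (14.4.18).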
14.4.2 Stable clustering
An alternative approach to self-similar evolution, which makes closer contact with the dynamics of clustering evolution, is to proceed from the power spectrum. Consider the behaviour of the linear power spectrum smoothed on a scale Rf; this is defined in Equation (13.3.12). At any time there will be a characteristic comoving scale R such that the spectrum smoothed on that scale has unit variance. If we assume a flat Friedmann model so that the linear density fluctuations grow as t^{2/3} and an initial power-law spectrum of the form P(k) = Ak^n, then this characteristic scale varies as
R(t) ∝ t^{4/(3n+9)}.    (14.4.19)
This, in turn, corresponds to a characteristic mass scale M that varies as
M ∝ t^{4/(n+3)}.    (14.4.20)
The assumption that there is self-similar evolution corresponds to the assumption that the two-point correlation function in the nonlinear regime ξ(x, t) is a function of a single similarity variable s = x/t^α, where the value of α is fixed by Equation (14.4.19) if the nonlinear behaviour matches onto the growth in the linear regime.
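To see where these exponents come from, note that for P(k) = Ak^n the linearly evolved variance smoothed on a comoving scale R behaves as σ²(R, t) ∝ t^{4/3} R^{−(n+3)} in the flat model. Requiring unit variance on the scale R(t) then gives

t^{4/3} R^{−(n+3)} = constant  ⇒  R(t) ∝ t^{4/(3n+9)},

and, since the corresponding mass is M ∝ ρm R³ with constant comoving background density, M ∝ t^{12/(3n+9)} = t^{4/(n+3)}, which is Equation (14.4.20); the similarity exponent defined above is therefore α = 4/(3n + 9).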
This idea can be connected with the behaviour of velocities by writing an equation for the conservation of pairs of particles:
∂ξ(x, t)/∂t + (1/ax²) ∂/∂x [x² v21(x, t)(1 + ξ(x, t))] = 0,    (14.4.21)
where v21(x, t) is the mean relative velocity of particles with separation x at time t (Davis and Peebles 1977; Peebles 1980). Under the similarity transformation mentioned above this equation assumes the form
−αs dξ/ds + (1/s²) d/ds [s² (v21(s)/at^{α−1})(1 + ξ)] = 0.    (14.4.22)
Now for very small separations it seems to be a reasonable ansatz to assume the clumps of matter are stable so that on average there is no net change in separation, i.e.
ṙ12 = ȧx12 + aẋ12 = 0.    (14.4.23)
This is called the stable clustering limit. Putting (14.4.23) into (14.4.22) and solving for ξ yields
ξ(s) ∝ s^{−γ},    (14.4.24)
where γ turns out to be the same as γvir given in (14.4.6).
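A quick way to see why this slope comes out as it does (an equivalent scaling argument, rather than the explicit solution of (14.4.22)) is the following. At fixed proper separation r = ax, stability implies that the mean number of neighbours within r is constant, so for ξ ≫ 1 one has ξ ∝ a³ ∝ t² in the flat model. Writing the self-similar solution as ξ ∝ s^{−γ} = x^{−γ} t^{αγ} and putting x ∝ t^{−2/3} at fixed proper r gives

ξ ∝ t^{γ(α + 2/3)} = t²  ⇒  γ = 2/(α + 2/3) = 3(3 + n)/(5 + n),

using α = 4/(3n + 9) from Equation (14.4.19); for n = 0 this gives γ = 1.8, the value quoted at the end of Section 14.4.1.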
14.4.3 Scaling of the power spectrum
The idea that some form of self-similarity might apply to the evolution of clustering into the nonlinear regime led Hamilton et al. (1991) to construct an ingenious model for how the power spectrum itself might evolve. In the linear regime P(k) retains its initial shape, but once clustering becomes strong its shape will change.
The basic idea is as follows. Let r0 be a Lagrangian comoving coordinate defined by
r0³ = (3/4π) ∫₀^r (1 + ξ) d³r′ = r³(1 + ξ̄),    (14.4.25)
where ξ̄ is the mean correlation function interior to some radius r. The Lagrangian radius r0 can be thought of as the size of a patch of the initial conditions that collapses to a size r when the structure goes nonlinear. At early times r and r0 coincide but as time passes r shrinks relative to r0. In the linear regime ξ̄ ≪ 1 simply grows as the square of the linear growth law, i.e. if Ω0 = 1 it grows as t^{4/3} or, alternatively, as a². If there is a stable clustering regime for ξ̄ ≫ 1, then the growth law must be ξ̄ ∝ a³ since the structures are fixed in physical coordinates.
These two limits motivate the suggestion that, anywhere between the two limiting cases of linear and stable clustering, the evolution of ξ̄ might be described by a kind of universal function of the initial mean correlation ξ̄0(r0) and a, i.e.
ξ̄ = F[a²ξ̄0(r0)],    (14.4.26)
where F[x] ≃ x for small x (recovering linear growth) and F[x] ∝ x^{3/2} for large x (the stable-clustering limit). Hamilton et al. (1991) compare this idea with the results of full numerical computations. They find that it works reasonably well, and provide a fitting formula for F that works in the intermediate regime. A subsequent study by Jain et al. (1995) refined and extended this approach.
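To show how (14.4.25) and (14.4.26) are used in practice, here is a minimal Python sketch; the interpolating function F_toy merely respects the two limits quoted above and is not the fitting formula published by Hamilton et al. (1991) or Jain et al. (1995), and the power-law initial correlation in the example is illustrative only.

    import numpy as np

    def F_toy(x):
        """Illustrative interpolation with the two quoted limits: F(x) ~ x for
        x << 1 and F(x) ~ x**1.5 for x >> 1.  NOT the published fitting formula."""
        return x * np.sqrt(1.0 + x)

    def nonlinear_xibar(r0, xibar0, a):
        """Map the linear mean correlation on the Lagrangian scale r0 to the
        nonlinear mean correlation and Eulerian scale r, Eqs (14.4.25)-(14.4.26)."""
        xibar_nl = F_toy(a ** 2 * xibar0)               # Eq. (14.4.26)
        r = r0 / (1.0 + xibar_nl) ** (1.0 / 3.0)        # Eq. (14.4.25): r0^3 = r^3 (1 + xibar)
        return r, xibar_nl

    # Example: a power-law initial mean correlation (slope chosen for illustration)
    r0 = np.logspace(-1.0, 2.0, 7)
    r, xibar = nonlinear_xibar(r0, xibar0=r0 ** (-1.8), a=1.0)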
14.4.4 Comments
Although this analysis is very simplified, it does give results which agree, at least qualitatively, with full N-body simulations of hierarchical clustering. It is possible to extend the ideas of self-similarity further, to the analysis of higher-order correlations. Although this latter approach yields what is called the hierarchical model for reduced N-point correlation functions, which is described in Section 16.4, this should not be thought of as a logical consequence of the highly approximate model we have described in this section. This general picture of self-similar clustering is also the motivation behind attempts to calculate the mass function of condensed objects, which we describe in the next section.
14.5 The Mass Function
The mass function n(M), also called the multiplicity function, of cosmic structures such as galaxies is defined by the relation
dN = n(M) dM,    (14.5.1)
which gives the number of the structures in question per unit volume with mass contained in the interval between M and M +dM. It is clear that the mass function and the luminosity function, defined in Section 4.5, contain the same information as long as one knows the value of the ratio M/L for the objects because
Φ(L) = n(M) dM/dL ≃ n(M) M/L.    (14.5.2)
This ratio, as we have mentioned in Chapter 4, is not known with any great certainty: for example, it seems to have values of order 10, 100 and 400 in solar units for galaxies, groups of galaxies and clusters, respectively. It is in practice impossible to recover the mass function from the observed luminosity function. On the other hand, in many cosmological problems, above all in those involving counts of objects at various distances, it is important to have an analytic expression for the mass function. This must therefore be calculated by some appropriate theoretical model. For this reason, Press and Schechter (1974) proposed a simple analytical model to calculate n(M). This method is still used today and, despite its simplicity and several obvious shortcomings, remains the most reliable method available for calculating this function analytically.
In the Press–Schechter approach one considers a density fluctuation field δ(x; R) ≡ δM, filtered on a spatial scale R corresponding to a mass M. In particular, if the density field possesses Gaussian statistics (see Section 13.7), the distribution of fluctuations is given by
P(δM) dδM = (2πσM²)^{−1/2} exp(−δM²/2σM²) dδM.    (14.5.3)
The probability that at some point the fluctuation δM exceeds some critical value δc is expressed by the relation
P>δc(M) = ∫_{δc}^{∞} P(δM) dδM;    (14.5.4)
this quantity depends on the filter mass M and, through the time-dependence of σM, on the redshift (or epoch). The probability P>δc is also proportional to the number of cosmic structures characterised by a density perturbation greater than δc, whether these are isolated or contained within denser structures which collapse with them. For example, in the spherical collapse approximation of Section 14.1, the value δc ≃ 1.68, obtained by extrapolating linear theory, represents structures which, having passed the phase of maximum expansion, have collapsed and reached their maximum density.
To find the number of regions with mass M which are isolated, in other words surrounded by underdense regions, one must subtract from P>δc(M) the quantity P>δc(M + dM), proportional to the number of objects entering the nonlinear regime characterised by δc on the appropriate mass scale. In making this assumption we have completely ignored the so-called cloud-in-cloud problem, which is the possibility that at a given instant some object, which is nonlinear on a scale M, can later be contained within another object on a larger mass scale. It is necessary effectively to take the probability in Equation (14.5.4) to be proportional to the probability that a given point has ever been contained in a collapsed object on some scale greater than M or, in other words, that the only objects which exist on a given scale are those which have just collapsed. If an object has δ > δc when smoothed on a scale R, it will have δ = δc when smoothed on some larger scale R′ > R and will therefore be counted again as part of a higher level of the hierarchy. Another problem of this assumption is also obvious: it cannot treat underdense regions properly and therefore, by symmetry, half the mass is not accounted for. In the Press–Schechter analysis this is corrected by multiplying throughout by a factor 2, with the vague understanding that this represents accretion from the underdense regions onto the dense ones. The result is therefore that
n(M)M dM = 2ρm[P>δc(M) − P>δc(M + dM)] = 2ρm |dP>δc/dσM| |dσM/dM| dM.    (14.5.5)
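For reference, the intermediate step between (14.5.4) and the result quoted below in (14.5.7) can be made explicit. With the Gaussian (14.5.3), the integral (14.5.4) evaluates to

P>δc(M) = (1/2) erfc[δc/(√2 σM)],

so that

dP>δc/dσM = [δc/(√(2π) σM²)] exp(−δc²/2σM²),

which, inserted into (14.5.5) together with |dσM/dM| = ασM/M for the power law (14.5.6) below, leads directly to (14.5.7).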
The formula (14.5.5) becomes very simple in the case where the RMS mass fluctuation is expressed by a power law:
σM = (M/M0)^{−α}    (14.5.6)
(the preceding relation is also approximately valid if one does not have a pure power law but if α is interpreted as the effective index over the mass scale of interest). In this case we obtain, from Equations (14.5.3), (14.5.4) and (14.5.5), that
n(M) = √(2/π) [δcαρm/(σM M²)] exp(−δc²/2σM²) = √(2/π) (δcαρm/M0²)(M/M0)^{α−2} exp[−(δc²/2)(M/M0)^{2α}].    (14.5.7)
The mass function thus has a power-law behaviour with an exponential cut-off at the scale
M∗ = (2/δc²)^{1/2α} M0.    (14.5.8)
It is interesting to note that, for a constant value of the ratio M/L in Equations (14.5.2) and (14.5.7), one can obtain a functional form for the luminosity function Φ(L) similar to that of the Schechter function introduced in Chapter 4; to match exactly requires α = 1/2, in other words a white-noise spectrum.
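The following minimal Python sketch evaluates Equations (14.5.6)–(14.5.8); the parameter values (ρm = M0 = 1, δc = 1.68, α = 1/2) are placeholders chosen only for illustration, δc ≃ 1.68 being the spherical-collapse value quoted earlier.

    import numpy as np

    def sigma_M(M, M0=1.0, alpha=0.5):
        """RMS mass fluctuation, Equation (14.5.6)."""
        return (M / M0) ** (-alpha)

    def press_schechter(M, rho_m=1.0, M0=1.0, alpha=0.5, delta_c=1.68):
        """Mass function n(M) of Equation (14.5.7) for a power-law sigma_M."""
        s = sigma_M(M, M0, alpha)
        return (np.sqrt(2.0 / np.pi) * alpha * delta_c * rho_m
                / (s * M ** 2) * np.exp(-delta_c ** 2 / (2.0 * s ** 2)))

    def M_star(M0=1.0, alpha=0.5, delta_c=1.68):
        """Exponential cut-off scale, Equation (14.5.8)."""
        return (2.0 / delta_c ** 2) ** (1.0 / (2.0 * alpha)) * M0

    M = np.logspace(-3, 1, 9)
    print(press_schechter(M))
    print(M_star())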
From Equation (14.5.7) it is also possible to derive the time-evolution of an appropriately defined characteristic mass Mc(t). In the kinetic theory of fragmentation and coagulation, one often assumes
Mc(t) = [∫₀^∞ n(M; t)M² dM] / [∫₀^∞ n(M; t)M dM];    (14.5.9)
the time-dependence comes from the evolution of σM . In the simplest case in which σM is given by Equation (14.5.6) and is growing in the linear regime one finds that, in an Einstein–de Sitter universe,
Mc(t) = π^{−1/2} Γ[(1 + α)/2α] M∗(t0) (t/t0)^{2/3α}    (14.5.10)
(Γ is the Gamma function), in accordance with Equation (14.4.3), as one would expect.
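As an illustrative numerical check on (14.5.10), the ratio (14.5.9) can be evaluated directly in Python with placeholder values of α, δc and M0; the constant prefactor of (14.5.7) cancels in the ratio, so only the shape of n(M) is needed.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    alpha, delta_c, M0 = 0.5, 1.68, 1.0   # illustrative values only

    def n_shape(M):
        """Shape of n(M) from Eq. (14.5.7); constant prefactors cancel in (14.5.9)."""
        return M ** (alpha - 2.0) * np.exp(-0.5 * delta_c ** 2 * (M / M0) ** (2.0 * alpha))

    # Upper cut-off chosen where the exponential makes the integrand negligible
    M_max = M0 * (100.0 / delta_c ** 2) ** (1.0 / (2.0 * alpha))
    num = quad(lambda M: n_shape(M) * M ** 2, 0.0, M_max)[0]
    den = quad(lambda M: n_shape(M) * M, 0.0, M_max)[0]

    M_star = (2.0 / delta_c ** 2) ** (1.0 / (2.0 * alpha)) * M0      # Eq. (14.5.8)
    Mc_analytic = gamma((1.0 + alpha) / (2.0 * alpha)) / np.sqrt(np.pi) * M_star
    print(num / den, Mc_analytic)   # the two values should agree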
The Press–Schechter theory has been very successful and influential because it seems to describe rather well the behaviour of N-body simulations. Nevertheless, there are various assumptions made in this analysis which are extremely hard to justify. First there is the assumption that bound structures essentially form at peaks of the linear density field. While this must be some approximation to the real state of affairs, it can hardly be exact, because matter moves significantly from its initial Lagrangian position during nonlinear evolution, as clearly demonstrated by the Zel'dovich approximation. In fact, the problem here is that the Press–Schechter approach does not really deal with localised objects at all but is merely a recipe for labelling points in the primordial density field. It is also quite clear that the device of multiplying the probability (14.5.4) by a factor 2 to obtain Equation (14.5.5) cannot be justified. Some more sophisticated analyses, intended to tackle the cloud-in-cloud problem explicitly, have clarified aspects of the problem. In particular, recent studies have elucidated the real nature of the factor 2 as an artefact of overcounting due to cloud-in-cloud effects (Bond et al. 1991).

Figure 14.2 Example of a merger tree. The trunk of the tree represents the final mass of a halo and the branches show the various progenitors, with thickness representing the mass of the merging object. Picture courtesy of Shaun Cole.
The Press–Schechter model, despite all its failings, is well verified by comparison with N-body simulations and is therefore a useful predictive tool in many circumstances. Its greatest failing, however, is that it is inherently statistical: mass points are merely labels and no attempt is made to follow the detailed evolution of individual objects. To put this another way, two objects with the same mass M at some time t may have built up through an entirely different series of mergers of smaller objects, sometimes through dramatic encounters of two objects with roughly equal masses, and sometimes through one object steadily consuming much smaller ones. It is likely that these different merger histories give rise to different kinds of object. The approach of following individual merger histories, pioneered by Lacey and Cole (1993), is illustrated in Figure 14.2.
14.6 N-Body Simulations
The complexity of the physical behaviour of fluctuations in the nonlinear regime makes it impossible to study the details exactly using analytical methods. The methods we have described in Sections 14.1–14.5 are valuable for providing us with a physical understanding of the processes involved, but they do not allow us to make very detailed predictions to test against observations. For this task one must resort to numerical simulation methods.
It is possible to represent part of the expanding Universe as a ‘box’ containing a large number N of point masses interacting through their mutual gravity. This
box, typically a cube, must be at least as large as the scale at which the Universe becomes homogeneous if it is to provide a ‘fair sample’ which is representative of the Universe as a whole. It is common practice to take the cube as having periodic boundary conditions in all directions, which also assists in some of the computational techniques by allowing Fourier methods to be employed in summing the N-body forces. A number of numerical techniques are available at the present time; they differ, for the most part, only in the way the forces on each particle are calculated. We describe some of the most popular methods here.
14.6.1 Direct summation
The simplest way to compute the nonlinear evolution of a cosmological fluid is to represent it as a discrete set of particles, and then sum the (pairwise) interactions between them directly to calculate the Newtonian forces, as mentioned above. Such calculations are often called particle–particle, or PP, computations. With the adoption of a (small) timestep, one can use the resulting acceleration to update the particle velocity and then its position. New positions can then be used to recalculate the interparticle forces, and so on.
One should note at the outset that these techniques are not intended to represent the motion of a discrete set of particles. The particle configuration is itself an approximation to a fluid. There is also a numerical problem with summation of the forces: the Newtonian gravitational force between two particles increases as the particles approach each other and it is therefore necessary to choose an extremely small timestep to resolve the large velocity changes this induces. A very small timestep would require the consumption of enormous amounts of CPU time and, in any case, computers cannot handle the formally divergent force terms when the particles are arbitrarily close to each other. One usually avoids these problems by treating each particle not as a point mass, but as an extended body. The practical upshot of this is that one modifies the Newtonian force between particles by putting
Fij = Gm²(xj − xi)/(H² + |xi − xj|²)^{3/2},    (14.6.1)
where the particles are at positions xi and xj and they all have the same mass m; the form of this equation avoids infinite forces at zero separations. The parameter H in Equation (14.6.1) is usually called the softening length and it acts to suppress two-body forces on small scales. This is equivalent to replacing point masses by extended bodies with a size of order H. Since we are not supposed to be dealing with the behaviour of a set of point masses anyway, the introduction of a softening length is quite reasonable but it means one cannot trust the distribution of matter on scales of order H or less.
If we suppose our simulation contains N particles, then the direct summation of all the (N−1) interactions to compute the acceleration of each particle requires a total of N(N − 1)/2 evaluations of (14.6.1) at each timestep. This is the crucial limitation of these methods: they tend to be very slow, with the computational
time required scaling roughly as N². The maximum number of particles for which it is practical to use direct summation is of order 10⁴, which is not sufficient for realistic simulations of large-scale structure formation.
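A minimal Python sketch of a single PP step using the softened force of Equation (14.6.1) is given below; the softening value, the timestep, the equal particle masses and the absence of expansion or periodic boundaries are illustrative simplifications rather than part of the text's prescription.

    import numpy as np

    def pp_accelerations(x, m, H=0.05, G=1.0):
        """Direct-summation (PP) accelerations with the softened pairwise force of
        Equation (14.6.1); the cost of this loop scales as N**2."""
        acc = np.zeros_like(x)
        for i in range(len(x)):
            dx = x - x[i]                              # vectors x_j - x_i
            r2 = np.sum(dx ** 2, axis=1) + H ** 2      # softened squared separations
            w = G * m / r2 ** 1.5
            w[i] = 0.0                                 # exclude the (zero) self term
            acc[i] = np.sum(w[:, None] * dx, axis=0)
        return acc

    def pp_step(x, v, m, dt, H=0.05):
        """One kick-drift-kick update; a static (non-expanding) box is assumed."""
        a0 = pp_accelerations(x, m, H)
        v_half = v + 0.5 * dt * a0
        x_new = x + dt * v_half
        v_new = v_half + 0.5 * dt * pp_accelerations(x_new, m, H)
        return x_new, v_new

    rng = np.random.default_rng(0)
    N = 128
    x = rng.random((N, 3))                 # positions in a unit box
    v = np.zeros((N, 3))
    m = np.full(N, 1.0 / N)                # equal masses, total mass unity
    x, v = pp_step(x, v, m, dt=1e-3)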
14.6.2 Particle–mesh techniques
The usual method for improving upon direct N-body summation for computing inter-particle forces is some form of ‘particle–mesh’ (PM) scheme. In this scheme the forces are computed by assigning the particle masses to a regular grid and then solving Poisson's equation on that grid. The use of a regular grid with periodic boundary conditions allows one to use Fast Fourier Transform (FFT) methods to recover the potential, which leads to a considerable increase in speed. The basic steps in a PM calculation are as follows.
In the following, n is a vector representing a grid position (the three components of n are integers); xi is the location of the ith particle in the simulation volume; for simplicity we adopt a notation such that the Newtonian gravitational constant G ≡ 1, the length of the side of the simulation cube is unity and the total mass is also unity; M will be the number of mesh-cells along one side of the simulation cube, the total number of cells being N; the vector q is n/M. First we calculate the density on the grid:
ρ(q) = (M³/N) Σ_{i=1}^{N} W(xi − q),    (14.6.2)
where W defines a weighting scheme designed to assign mass to the mesh. We then calculate the potential by summing over the mesh
ϕ(q) = (1/M³) Σ_{q′} G(q − q′)ρ(q′)    (14.6.3)
(where G is an appropriate Green’s function for the Poisson equation), compute the resulting forces at the grid points,
F(q) = −(1/N) Dϕ,    (14.6.4)
and then interpolate to find the forces on each particle,
F(xi) = Σ_q W(xi − q) F(q).    (14.6.5)
In Equation (14.6.4), D is a finite differencing scheme used to derive the forces from the potential. We shall not go into the various possible choices of weighting function W in this brief treatment: possibilities include ‘nearest gridpoint’ (NGP), ‘cloud-in-cell’ (CIC) and ‘triangular-shaped clouds’ (TSC).
We have written the computation of ϕ as a convolution but the most important advantage of the PM method is that it allows a much faster calculation of the
potential than this. The usual approach is to Fourier transform the density field ρ, which allows the transform of ϕ to be expressed as a product of transforms of the two terms in (14.6.3) rather than a convolution; the periodic boundary conditions allow FFTs to be used to transform backwards and forwards, and this saves a considerable amount of computer time. The potential on the grid is thus written
ϕ(l, m, n) = Σ_{p,q,r} Ĝ(p, q, r) ρ̂(p, q, r) exp[2πi(pl + qm + rn)/M],    (14.6.6)
where the ‘hats’ denote Fourier transforms of the relevant mesh quantities. There are different possibilities for the transformed Green's function Ĝ, the most straightforward being simply
Ĝ(p, q, r) = −1/[π(p² + q² + r²)],    (14.6.7)
unless p = q = r = 0, in which case Ĝ = 0. Equation (14.6.6) represents a sum, rather than the convolution in Equation (14.6.3), and its evaluation can therefore be performed much more quickly. The calculation of the forces in Equation (14.6.5) can also be speeded up by computing them in Fourier space. An FFT is basically of order N log N in the number of grid points and this represents a substantial improvement for large N over the direct particle–particle summation technique. The price to be paid for this is that the Fourier summation method implicitly requires that the simulation box has periodic boundary conditions: this is probably the most reasonable choice for simulating a ‘representative’ part of the Universe, so this does not seem to be too high a price.
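To make the sequence of steps concrete, here is a minimal Python sketch of the PM calculation using nearest-gridpoint (NGP) assignment and NumPy FFTs; the grid size, the NGP weighting and the centred-difference gradient are illustrative choices, and the normalisation follows NumPy's FFT conventions rather than the explicit factors written above.

    import numpy as np

    def pm_potential(x, Mgrid=32):
        """PM potential on the mesh: NGP density assignment (cf. Eq. 14.6.2) and an
        FFT Poisson solve with the Green's function of Eq. (14.6.7).  Conventions:
        box side 1, total mass 1, G = 1, periodic boundaries."""
        N = len(x)
        idx = np.floor(x * Mgrid).astype(int) % Mgrid
        rho = np.zeros((Mgrid, Mgrid, Mgrid))
        np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), Mgrid ** 3 / N)
        rho_hat = np.fft.fftn(rho)
        p = np.fft.fftfreq(Mgrid) * Mgrid                   # integer wavenumbers p, q, r
        p2 = p[:, None, None] ** 2 + p[None, :, None] ** 2 + p[None, None, :] ** 2
        G_hat = np.zeros_like(p2)
        G_hat[p2 > 0] = -1.0 / (np.pi * p2[p2 > 0])         # Eq. (14.6.7), zero mode set to zero
        return np.real(np.fft.ifftn(G_hat * rho_hat))       # the sum of Eq. (14.6.6)

    def mesh_accelerations(phi):
        """Centred finite difference of the potential, cf. Eq. (14.6.4); the 1/N
        particle-mass factor is omitted, so this is an acceleration field."""
        Mgrid = phi.shape[0]
        return np.stack([-(np.roll(phi, -1, axis=k) - np.roll(phi, 1, axis=k)) * Mgrid / 2.0
                         for k in range(3)], axis=-1)

    rng = np.random.default_rng(1)
    x = rng.random((4096, 3))
    phi = pm_potential(x)
    g = mesh_accelerations(phi)
    idx = np.floor(x * phi.shape[0]).astype(int) % phi.shape[0]
    g_particles = g[idx[:, 0], idx[:, 1], idx[:, 2]]        # NGP interpolation, cf. Eq. (14.6.5)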
The potential weakness of this method is the comparatively poor force resolution on small scales because of the finite spatial size of the mesh. A substantial increase in spatial resolution can be achieved by using instead a hybrid ‘particle– particle–particle–mesh’ method, which solves the short range forces directly (PP) but uses the mesh to compute those of longer range (PM); hence PP + PM = P3M, the usual name of such codes. Here, the short-range resolution of the algorithm is improved by adding a correction to the mesh force. This contribution is obtained by summing directly all the forces from neighbours within some fixed distance rs of each particle. A typical choice for rs will be around three grid units. Alternatively, one can use a modified force law on these small scales to assign any particular density profile to the particles, similar to the softening procedure demonstrated in Equation (14.6.1). This part of the force calculation may well be quite slow, so it is advantageous merely to calculate the short-range force at the start for a large number of points spaced linearly in radius, and then find the actual force by simple interpolation. The long-range part of the force calculation is done by a variant of the PM method described earlier.
Variants of the PM and P3M technique are now the standard workhorses for cosmological clustering studies. Different workers have slightly different interpolation schemes and choices of softening length. Whether one should use PM