The Law of Large Numbers

Weak law

The weak law of large numbers (also called Khintchine's law) states that the sample average converges in probability towards the expected value:

\bar{X}_n \,\xrightarrow{P}\, \mu \qquad \text{when } n \to \infty.

That is to say, for any positive number ε,

\lim_{n\to\infty} \Pr\big(\,|\bar{X}_n - \mu| < \varepsilon\,\big) = 1.

Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin.

Convergence in probability is also called weak convergence of random variables. This version is called the weak law because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as below.
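The weak law can be illustrated with a small simulation; the fair-coin example below (with expected value 0.5) is a hypothetical sketch, not part of the original text. It estimates the probability of the sample average landing outside a margin ε for a small and a large sample size.

```python
import random

random.seed(0)

def sample_mean(n):
    """Mean of n fair coin flips (0 or 1); the expected value is 0.5."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

def prob_outside(n, eps, trials=2000):
    """Monte Carlo estimate of P(|sample mean - 0.5| >= eps)."""
    return sum(abs(sample_mean(n) - 0.5) >= eps for _ in range(trials)) / trials

# The probability of landing outside the margin shrinks as n grows,
# which is exactly what convergence in probability asserts.
p_small = prob_outside(10, 0.1)
p_large = prob_outside(1000, 0.1)
print(p_small, p_large)
```

For n = 10 the average misses the margin often; for n = 1000 it essentially never does.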

Strong law

The strong law of large numbers states that the sample average converges almost surely to the expected value:

\bar{X}_n \,\xrightarrow{\text{a.s.}}\, \mu \qquad \text{when } n \to \infty.

That is,

\Pr\Big( \lim_{n\to\infty} \bar{X}_n = \mu \Big) = 1.

The proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable, when sampled repeatedly, as the "long-term average".

Almost sure convergence is also called strong convergence of random variables. This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). The strong law implies the weak law.

The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem.
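The almost-sure statement concerns a single realized sequence: along one path, the running average eventually stays near the mean. The die-rolling sketch below (a hypothetical example, not from the text) tracks one path's running average of fair die rolls, whose expected value is 3.5.

```python
import random

random.seed(1)

# One sequence of fair die rolls, truncated at N.  The strong law says the
# running average of this single path converges to the expected value 3.5.
N = 100_000
running_sum = 0.0
deviations_after_burn_in = []
for n in range(1, N + 1):
    running_sum += random.randint(1, 6)
    if n > 10_000:
        deviations_after_burn_in.append(abs(running_sum / n - 3.5))

# Along this one path, the average stays within a small margin for all
# sufficiently large n -- not just "with high probability at each fixed n".
print(max(deviations_after_burn_in))
```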

Moreover, if the summands are independent but not identically distributed, then

\bar{X}_n - \frac{1}{n}\sum_{k=1}^{n} \mathrm{E}[X_k] \,\xrightarrow{\text{a.s.}}\, 0,

provided that each X_k has a finite second moment and

\sum_{k=1}^{\infty} \frac{1}{k^2} \operatorname{Var}[X_k] < \infty.
This statement is known as Kolmogorov's strong law, see e.g. Sen & Singer (1993, Theorem 2.3.10).
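A sketch of Kolmogorov's condition in action, using a hypothetical family of summands chosen for illustration: X_k uniform on [-k^{1/4}, k^{1/4}], so Var[X_k] = k^{1/2}/3 grows without bound, yet the series of Var[X_k]/k^2 still converges and the centered average tends to zero.

```python
import random

random.seed(2)

# Independent but NOT identically distributed summands: X_k is uniform on
# [-k**0.25, k**0.25], so Var[X_k] = k**0.5 / 3 grows with k.  The
# Kolmogorov condition sum_k Var[X_k] / k**2 < infinity nevertheless holds,
# since the terms behave like k**(-1.5).
tail = sum((k ** 0.5 / 3) / k ** 2 for k in range(1, 10_000))
print("partial sum of Var[X_k]/k^2:", tail)

N = 50_000
s = 0.0
for k in range(1, N + 1):
    a = k ** 0.25
    s += random.uniform(-a, a)
# Each E[X_k] = 0, so the centered average is just s / N, and it is small.
print("average of first N summands:", s / N)
```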

Differences between the weak law and the strong law

The weak law states that for a specified large n, the average \bar{X}_n is likely to be near μ. Thus, it leaves open the possibility that |\bar{X}_n - \mu| > \varepsilon happens an infinite number of times, although at infrequent intervals.

The strong law shows that this almost surely will not occur. In particular, it implies that with probability 1, for any ε > 0 the inequality |\bar{X}_n - \mu| < \varepsilon holds for all large enough n.

There are cases in which the strong law does not hold but the weak law does.

1. Let X be an exponentially distributed random variable with parameter 1, and consider a transformation of X with the following expected value:

2. Let X be a geometrically distributed random variable with success probability 0.5, and consider a transformation of X with the following expected value:


Uniform law of large numbers

Suppose f(x,θ) is some function defined for θ ∈ Θ and continuous in θ. Then for any fixed θ, the sequence {f(X1,θ), f(X2,θ), …} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is pointwise (in θ) convergence.

The uniform law of large numbers states the conditions under which the convergence happens uniformly in θ.

  1. Θ is compact,

  2. f(x,θ) is continuous at each θ ∈ Θ for almost all x's, and a measurable function of x at each θ,

  3. there exists a dominating function d(x) such that E[d(X)] < ∞ and

\|f(x,\theta)\| \le d(x) \quad \text{for all } \theta \in \Theta.

Then E[f(X,θ)] is continuous in θ, and

\sup_{\theta \in \Theta} \Big\| \frac{1}{n}\sum_{i=1}^{n} f(X_i,\theta) - \mathrm{E}[f(X,\theta)] \Big\| \,\xrightarrow{\text{a.s.}}\, 0.
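A hypothetical instance of the uniform law, chosen so every condition is easy to check: f(x,θ) = cos(θx) with θ ∈ [0, 1] (compact), X standard normal. Then |f| ≤ 1, so d(x) = 1 dominates, and E[f(X,θ)] = exp(-θ²/2) in closed form, which lets us measure the supremum gap over θ directly.

```python
import math
import random

random.seed(3)

# f(x, theta) = cos(theta * x), theta in the compact set [0, 1], X ~ N(0, 1).
# |f| <= 1, so d(x) = 1 is a dominating function with E[d(X)] = 1 < infinity,
# and E[cos(theta * X)] = exp(-theta**2 / 2) exactly.
n = 20_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

thetas = [i / 100 for i in range(101)]  # grid over the compact parameter set
sup_gap = max(
    abs(sum(math.cos(t * x) for x in xs) / n - math.exp(-t * t / 2))
    for t in thetas
)
# The supremum over theta of |sample mean - E f| is already small at this n.
print("sup over theta of |sample mean - E f|:", sup_gap)
```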

Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one,

\frac{N_n(E)}{n} \to p \qquad \text{as } n \to \infty.
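Borel's law can be sketched with a hypothetical event: E = "a fair die shows a six", so p = 1/6, and the empirical frequency N_n(E)/n drifts toward p as n grows.

```python
import random

random.seed(4)

# Event E: a fair die shows a six, so p = 1/6.
# frequency(n) computes N_n(E) / n for one run of n trials.
def frequency(n):
    return sum(random.randint(1, 6) == 6 for _ in range(n)) / n

gap_small = abs(frequency(100) - 1 / 6)
gap_large = abs(frequency(100_000) - 1 / 6)
# With many repetitions the frequency approximates p much more tightly.
print(gap_small, gap_large)
```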

Chebyshev's Lemma. Let X be a random variable with finite expected value μ and finite non-zero variance σ². Then for any real number k > 0,

\Pr\big(|X - \mu| \ge k\sigma\big) \le \frac{1}{k^2}.
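A Monte Carlo check of the lemma, using the hypothetical choice X ~ Exp(1) (so μ = 1 and σ = 1): for several values of k, the empirical tail probability should never exceed the bound 1/k².

```python
import random

random.seed(5)

# X ~ Exp(1), for which mu = 1 and sigma = 1; Chebyshev's lemma says
# P(|X - mu| >= k * sigma) <= 1 / k**2 for every k > 0.
n = 200_000
xs = [random.expovariate(1.0) for _ in range(n)]

checks = []
for k in (1.5, 2.0, 3.0):
    p_hat = sum(abs(x - 1.0) >= k for x in xs) / n
    bound = 1 / k ** 2
    checks.append(p_hat <= bound)  # empirical tail vs. Chebyshev bound
    print(k, p_hat, bound)
```

The bound is loose for this distribution (the true tails decay exponentially), which is typical: Chebyshev trades sharpness for complete generality.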

This theorem makes rigorous the intuitive notion of probability as the long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory.

Proof

Given X1, X2, …, an infinite sequence of i.i.d. random variables with finite expected value E(X1) = E(X2) = … = μ < ∞, we are interested in the convergence of the sample average

\bar{X}_n = \frac{1}{n}\,(X_1 + \cdots + X_n).

The weak law of large numbers states:

Theorem: \bar{X}_n \,\xrightarrow{P}\, \mu \quad \text{when } n \to \infty.

Proof using Chebyshev's inequality

This proof uses the assumption of finite variance \operatorname{Var}(X_i) = \sigma^2 (the same for all i). The independence of the random variables implies no correlation between them, and we have

\operatorname{Var}(\bar{X}_n) = \frac{\sigma^2}{n}.

The common mean μ of the sequence is also the mean of the sample average:

\mathrm{E}(\bar{X}_n) = \mu.

Using Chebyshev's inequality on \bar{X}_n results in

\Pr\big(|\bar{X}_n - \mu| \ge \varepsilon\big) \le \frac{\sigma^2}{n\varepsilon^2}.

This may be used to obtain the following:

\Pr\big(|\bar{X}_n - \mu| < \varepsilon\big) = 1 - \Pr\big(|\bar{X}_n - \mu| \ge \varepsilon\big) \ge 1 - \frac{\sigma^2}{n\varepsilon^2}.

As n approaches infinity, the right-hand side approaches 1. By the definition of convergence in probability, we have obtained

\bar{X}_n \,\xrightarrow{P}\, \mu \qquad \text{when } n \to \infty.
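The two quantitative steps of the proof can be checked numerically; the sketch below uses the hypothetical choice X_i ~ Uniform(0, 1), so μ = 0.5 and σ² = 1/12, and verifies both Var(X̄_n) = σ²/n and the Chebyshev-derived tail bound.

```python
import random

random.seed(6)

# Check of the two key steps for X_i ~ Uniform(0, 1): mu = 0.5, sigma^2 = 1/12.
#   (a) Var(mean of n samples) = sigma^2 / n
#   (b) P(|mean - mu| >= eps) <= sigma^2 / (n * eps^2)
n, eps, trials = 100, 0.05, 20_000
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

var_hat = sum((m - 0.5) ** 2 for m in means) / trials
p_hat = sum(abs(m - 0.5) >= eps for m in means) / trials
bound = (1 / 12) / (n * eps ** 2)

print(var_hat, (1 / 12) / n)   # (a): the two variances should be close
print(p_hat, bound)            # (b): the empirical tail respects the bound
```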
