
- •2. The classical definition of probability.
- •3. Geometric definition of probability. The meeting problem. Geometric probability.
- •4. Elements of combinatorics.
- •5. A permutation. The number of permutations of n objects.
- •Example
- •6. A combination. The number of combinations of n distinct objects taken r at a time.
- •An Example of Combinations
- •7. Additive Rules (Addition formula of probability). 1) A1, A2, . . . , An are mutually exclusive. 2) A1, A2, . . . , An are any events.
- •8. Conditional probability.
- •9. Independence
- •Independent events
- •11. The Theorem of total probability.
- •12. Bayes’ Rule.
- •13. Bernoulli scheme. Bernoulli distribution.
- •14. Poisson approximation formula.
- •15. The local de Moivre-Laplace theorem.
- •18. Independence of random variables.
- •21. Discrete random variable
- •22. Discrete Probability Distributions. Probability Density function.
- •23. Discrete Probability Distributions
- •24. Continuous distribution function
- •25. Continuous distribution function
- •Example
- •27. Joint Density Function
- •28. Conditional distribution
- •29. Statistical Independence
- •Independent events
- •30. Mathematical expectation
- •31. Extension of mathematical expectation to the case of two random variables.
- •32. Variance of random variables
- •33. Standard deviation.
- •35. Covariance of Random Variables
- •36. The correlation coefficient.
- •37. Means and Variances of Linear Combinations of Random Variables. We are still working towards finding the theoretical mean and variance of the sample mean:
- •Example
- •38. Chebyshev’s Theorem.
- •Example
- •39. Some Discrete Probability Distributions. Binomial and Multinomial Distributions.
- •40. Geometric Distribution.
- •41. Poisson Distribution.
- •42. Continuous Uniform Distribution. Normal Distribution.
- •43. Exponential Distributions.
- •44. Moments and Moment-Generating Functions.
- •45. Populations and Samples. Some Important Statistics.
- •46. Location Measures of a Sample: The Sample Mean, Median, and Mode.
- •1.Sample Mean.
- •Note that the statistic X̄ (the sample mean) assumes the value
- •47. The Sample Variance, Standard Deviation and Range.
- •48. The Central Limit Theorem.
- •49. The Likelihood Function.
- •50. Point estimate.
- •51. Estimating the Mean.
- •53. Single Sample: Estimating the Variance.
- •54. Sampling Distribution of s2.
- •55. Statistical Hypotheses: General Concepts.
- •56. Prove the formula of the Poisson distribution.
9. Independence
In probability theory, to say that two events are independent intuitively means that the occurrence of one event makes it neither more nor less probable that the other occurs. For example:
The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent.
By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trials is 8 are dependent.
If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent.
By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are dependent.
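A quick way to see the difference between the two card experiments is to simulate them. The following sketch is an illustration added here, not part of the original notes; the function name simulate, the trial count, and the seed are arbitrary choices. It estimates P(A), P(B), and P(AB) for A = "red on the first draw" and B = "red on the second draw", with and without replacement, and compares P(AB) with P(A)P(B):

```python
import random

def simulate(with_replacement, trials=200_000, seed=0):
    """Estimate P(A), P(B), P(AB) for two draws from a 52-card deck."""
    rng = random.Random(seed)
    deck = ["red"] * 26 + ["black"] * 26
    a = b = ab = 0
    for _ in range(trials):
        first = rng.choice(deck)
        if with_replacement:
            second = rng.choice(deck)
        else:
            rest = deck.copy()
            rest.remove(first)          # second card comes from the remaining 51
            second = rng.choice(rest)
        a += first == "red"
        b += second == "red"
        ab += (first == "red") and (second == "red")
    return a / trials, b / trials, ab / trials

for mode in (True, False):
    pa, pb, pab = simulate(mode)
    label = "with replacement   " if mode else "without replacement"
    print(label, f"P(A)={pa:.3f}  P(B)={pb:.3f}  P(AB)={pab:.3f}  P(A)P(B)={pa*pb:.3f}")
```

With replacement the estimates of P(AB) and P(A)P(B) both come out near 0.25; without replacement P(AB) is close to (26/52)(25/51) ≈ 0.245, visibly different from P(A)P(B) = 0.25.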
Similarly, two random variables are independent if the conditional probability distribution of either given the observed value of the other is the same as if the other's value had not been observed. The concept of independence extends to dealing with collections of more than two events or random variables.
In some instances, the term "independent" is replaced by "statistically independent", "marginally independent", or "absolutely independent".
Independent events
The standard definition says: two events A and B are independent if and only if
P(A ∩ B) = P(A)P(B).
Here A ∩ B is the intersection of A and B, that is, it is the event that both events A and B occur.
More generally, any collection of events—possibly more than just two of them—are mutually independent if and only if for every finite subset A1, ..., An of the collection we have
P(A1 ∩ A2 ∩ ... ∩ An) = P(A1)P(A2)...P(An).
This is called the multiplication rule for independent events. Notice that independence requires this rule to hold for every finite subset of the collection; see [2] for a three-event example in which
P(A ∩ B ∩ C) = P(A)P(B)P(C)
and yet no two of the three events are pairwise independent.
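As a small illustration (a sketch added here, not taken from the original notes), the dice events from the beginning of this section can be checked exhaustively over the 36 equally likely outcomes: the multiplication rule holds for "6 on the first roll" and "6 on the second roll", but fails for "6 on the first roll" and "the sum is 8":

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of rolling a die twice.
outcomes = list(product(range(1, 7), repeat=2))
p = lambda event: Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 6                  # 6 on the first roll
B = lambda o: o[1] == 6                  # 6 on the second roll
C = lambda o: o[0] + o[1] == 8           # the sum is 8

print(p(lambda o: A(o) and B(o)), p(A) * p(B))   # 1/36 == 1/36  -> independent
print(p(lambda o: A(o) and C(o)), p(A) * p(C))   # 1/36 vs 5/216 -> dependent
```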
If two events A and B are independent, then the conditional probability of A given B is the same as the unconditional (or marginal) probability of A, that is,
P(A | B) = P(A).
There are at least two reasons why this statement is not taken to be the definition of independence: (1) the two events A and B do not play symmetrical roles in this statement, and (2) problems arise with this statement when events of probability 0 are involved.
The conditional probability of event A given B is given by
P(A | B) = P(A ∩ B) / P(B)   (provided P(B) > 0).
The statement above, when P(B) ≠ 0, is equivalent to
P(A ∩ B) = P(A)P(B),
which is the standard definition given above.
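As a worked example (added here; the probabilities are the ones obtained by counting outcomes in the dice sketch above, not figures from the original notes), the conditional-probability form shows the dependence of A = "6 on the first roll" on C = "the sum is 8":

```python
from fractions import Fraction

# P(A|C) = P(A ∩ C) / P(C), with probabilities from counting the 36 outcomes.
p_A, p_C, p_AC = Fraction(1, 6), Fraction(5, 36), Fraction(1, 36)
p_A_given_C = p_AC / p_C
print(p_A_given_C, p_A)   # 1/5 vs 1/6: P(A|C) != P(A), so A and C are dependent
```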
Note that an event A is independent of itself if and only if
P(A) = P(A ∩ A) = P(A)P(A),
that is, if its probability is one or zero (since p = p² holds only for p = 0 or p = 1). Thus if an event or its complement almost surely occurs, it is independent of itself. For example, if event A is choosing any number but 0.5 from a uniform distribution on the unit interval, A is independent of itself, even though, tautologically, A fully determines A.
10. The Product Rule (Multiplication Formula of Probabilities).
1) A1, A2, . . . , An are independent. 2) A1, A2, . . . , An are any events.
The definition of conditional probability, P(A|B)=P(AB)/P(B), can be rewritten as P(AB)=P(A|B)P(B). This is the product rule.
Example: If P(A) = .5 and P(B|A) = .4, then P(AB) = P(B|A)P(A) = .4 × .5 = .2 (of course, AB = BA).
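Conditioning step by step extends the product rule to more than two events: P(A1 A2 A3) = P(A1)P(A2|A1)P(A3|A1 A2). The short sketch below is an illustration added here (the card scenario is my own choice, not from the notes); it applies this chain rule to drawing three red cards in a row without replacement:

```python
from fractions import Fraction

# P(R1 R2 R3) = P(R1) * P(R2 | R1) * P(R3 | R1 R2), Ri = "i-th card is red".
p = Fraction(26, 52) * Fraction(25, 51) * Fraction(24, 50)
print(p, float(p))   # 2/17, about 0.1176
```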
Product rule for independent events
If A and B are independent, P(AB)=P(A)P(B) (because P(A|B)=P(A) for independent events). (Example: If A and B are independent and P(A)=.3 and P(B)=.6, then P(AB)=.3 × .6 = .18.)
N.B.: If A and B are disjoint (which includes the case where A and B are complementary), then
P(AB) = 0 and P(A|B) = 0 = P(B|A).