9. Independence

In probability theory, to say that two events are independent intuitively means that the occurrence of one event makes it neither more nor less probable that the other occurs. For example:

  • The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent.

  • By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trials is 8 are dependent (see the simulation sketch after this list).

  • If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent.

  • By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are dependent.
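
A quick way to make the die example above concrete is a small Monte Carlo check. The following Python sketch (the sample size and variable names are illustrative assumptions, not part of the original text) estimates the joint probability of each pair of events and compares it with the product of the marginals: the two quantities roughly agree for the independent pair and visibly disagree for the dependent one.

import random

# Monte Carlo check of the die example above (illustrative sketch).
# A: first roll shows a 6.  B: second roll shows a 6.  C: the sum is 8.
N = 200_000  # sample size chosen for illustration
count_A = count_B = count_C = count_AB = count_AC = 0

for _ in range(N):
    first = random.randint(1, 6)
    second = random.randint(1, 6)
    A = first == 6
    B = second == 6
    C = first + second == 8
    count_A += A
    count_B += B
    count_C += C
    count_AB += A and B
    count_AC += A and C

p_A, p_B, p_C = count_A / N, count_B / N, count_C / N
# Independent pair: the joint probability is close to the product.
print("P(A)P(B) =", p_A * p_B, "  P(A and B) =", count_AB / N)
# Dependent pair: the joint probability differs from the product.
print("P(A)P(C) =", p_A * p_C, "  P(A and C) =", count_AC / N)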

Similarly, two random variables are independent if the conditional probability distribution of either given the observed value of the other is the same as if the other's value had not been observed. The concept of independence extends to dealing with collections of more than two events or random variables.
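
The definition in terms of conditional distributions can be seen on a small discrete example. The sketch below (a Python illustration of my own, using two fair dice) computes the conditional distribution of one die given an observed value of the other and confirms that it coincides with the marginal distribution.

from fractions import Fraction
from collections import Counter

# Sketch (my own example): X and Y are the results of two independent fair dice.
# For independent random variables the conditional distribution of X given an
# observed value of Y coincides with the marginal distribution of X.
outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]  # 36 equally likely pairs

marginal_counts = Counter(x for x, _ in outcomes)
conditional_counts = Counter(x for x, y in outcomes if y == 3)  # condition on Y = 3

marginal = {x: Fraction(c, len(outcomes)) for x, c in marginal_counts.items()}
conditional = {x: Fraction(c, sum(conditional_counts.values()))
               for x, c in conditional_counts.items()}
print(marginal == conditional)  # True: both are uniform on 1..6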

In some instances, the term "independent" is replaced by "statistically independent", "marginally independent", or "absolutely independent".

Independent events

The standard definition says: two events A and B are independent if and only if

P(A ∩ B) = P(A) P(B).

Here A ∩ B is the intersection of A and B, that is, it is the event that both events A and B occur.

More generally, any collection of events—possibly more than just two of them—are mutually independent if and only if for every finite subset A1, ..., An of the collection we have

P(A1 ∩ A2 ∩ ... ∩ An) = P(A1) P(A2) ... P(An).

This is called the multiplication rule for independent events. Notice that independence requires this rule to hold for every subset of the collection; see [2] for a three-event example in which P(A ∩ B ∩ C) = P(A) P(B) P(C) and yet no two of the three events are pairwise independent.
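
On a finite sample space with equally likely outcomes the multiplication rule can be checked mechanically. The sketch below (a Python illustration; the helper names and the two-coin example are my own assumptions) tests the product condition on every subset of two or more events, which is exactly what mutual independence demands.

from itertools import combinations
from fractions import Fraction

# Sketch: on a finite sample space with equally likely outcomes, mutual
# independence means the multiplication rule holds for EVERY subset of the
# collection, so it can be tested by brute force.

def prob(event, omega):
    return Fraction(len(event & omega), len(omega))

def mutually_independent(events, omega):
    for k in range(2, len(events) + 1):
        for subset in combinations(events, k):
            intersection = omega
            product = Fraction(1)
            for e in subset:
                intersection = intersection & e
                product *= prob(e, omega)
            if prob(intersection, omega) != product:
                return False
    return True

# Illustrative assumption: two fair coin tosses.
# A = "first toss is heads", B = "second toss is heads".
omega = {("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")}
A = {("H", "H"), ("H", "T")}
B = {("H", "H"), ("T", "H")}
print(mutually_independent([A, B], omega))  # True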

If two events A and B are independent, then the conditional probability of A given B is the same as the unconditional (or marginal) probability of A, that is,

P(A | B) = P(A).

There are at least two reasons why this statement is not taken to be the definition of independence: (1) the two events A and B do not play symmetrical roles in this statement, and (2) problems arise with this statement when events of probability 0 are involved.

The conditional probability of event A given B is given by

P(A | B) = P(A ∩ B) / P(B).

The statement above, when P(B) ≠ 0, is equivalent to

P(A ∩ B) = P(A) P(B),

which is the standard definition given above.
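
As a small worked illustration (my own example, not from the original text): in a standard 52-card deck the events A = "the card is red" and B = "the card is an ace" are independent, and the check below shows that P(A|B) = P(A) and P(A ∩ B) = P(A)P(B) amount to the same thing.

from fractions import Fraction

# Illustrative check (my own example): in a standard 52-card deck,
# A = "the card is red" and B = "the card is an ace" are independent.
p_A = Fraction(26, 52)        # twenty-six red cards
p_B = Fraction(4, 52)         # four aces
p_A_and_B = Fraction(2, 52)   # two red aces
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B == p_A)      # True: P(A|B) = P(A) = 1/2
print(p_A_and_B == p_A * p_B)  # True: the standard definition P(AB) = P(A)P(B)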

Note that an event is independent of itself if and only if

P(A) = P(A) P(A),

that is, if its probability is one or zero. Thus if an event or its complement almost surely occurs, it is independent of itself. For example, if event A is choosing any number but 0.5 from a uniform distribution on the unit interval, A is independent of itself, even though, tautologically, A fully determines A.

10. The Product Rule (Multiplication Formula of Probabilities)

There are two cases.
1) If A1, A2, . . . , An are independent, then
P(A1 A2 . . . An) = P(A1) P(A2) . . . P(An).
2) If A1, A2, . . . , An are any events, then
P(A1 A2 . . . An) = P(A1) P(A2 | A1) P(A3 | A1 A2) . . . P(An | A1 A2 . . . An-1).

The definition of conditional probability, P(A|B)=P(AB)/P(B), can be rewritten as P(AB)=P(A|B)P(B). This is the product rule.

Example: If P(A) = .5 and P(B|A) = .4, then P(BA) = .4 × .5 = .2 (of course, AB = BA).
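
For more than two events the product rule chains conditional probabilities together. The following sketch (an illustrative example of my own: drawing three red cards without replacement) computes P(A1 A2 A3) = P(A1) P(A2|A1) P(A3|A1 A2) and confirms it against a direct count.

from fractions import Fraction
from math import comb

# Illustrative example of the chain rule: drawing three red cards in a row
# from a 52-card deck without replacement.
# A1 = "first card red", A2 = "second card red", A3 = "third card red".
p_chain = Fraction(26, 52) * Fraction(25, 51) * Fraction(24, 50)
# = P(A1) * P(A2|A1) * P(A3|A1 A2)

# The same probability obtained by counting favourable unordered hands.
p_count = Fraction(comb(26, 3), comb(52, 3))
print(p_chain == p_count)  # True: both equal 2/17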

Product rule for independent events

  • If A and B are independent, P(AB)=P(A)P(B) (because P(A|B)=P(A) for independent events). (Example: If A and B are independent and P(A)=.3 and P(B)=.6, then P(AB)=.3 × .6 = .18.)

  • N.B.: If A and B are disjoint (which includes the case where A and B are complementary), then, as illustrated in the sketch after this list:

    • P(AB)=0

    • P(A|B)=0=P(B|A)
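
Disjointness and independence are easy to conflate, so here is a minimal sketch (the one-die example is my own assumption) showing that disjoint events with positive probabilities can never be independent: their intersection has probability 0 while the product of their probabilities does not.

from fractions import Fraction

# Illustrative sketch: disjoint events with positive probabilities are never
# independent.  One roll of a fair die; A = "the roll is 1", B = "the roll is 2".
omega = {1, 2, 3, 4, 5, 6}
A = {1}
B = {2}

def p(event):
    return Fraction(len(event), len(omega))

print(p(A & B))         # 0, because A and B are disjoint
print(p(A) * p(B))      # 1/36, so P(AB) != P(A)P(B): not independent
print(p(A & B) / p(B))  # 0 = P(A|B), as stated above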