
- Ministry of Education and Science of Ukraine
- V. N. Pavlysh
- 1. Sets, sequences and functions
- 1.1 Some Special Sets
- Exercises 1.1
- 1.2 Set Operations
- Figures 1–7
- Exercises 1.2
- 1.3 Functions
- Figures 1–5
- 1.4 Inverses of Functions
- Figure 3
- 1.5 Sequences
- Figures 1–4
Figure 3

14. Repeat Exercise 13 for the table in Figure 4.

   n   | log10 n |       |       |
-------|---------|-------|-------|------
   50  |   1.70  |  7.07 | 53.18 | 4.52
  100  |         |       |       |
  10^4 |         |       |       |
  10^6 |         |       |       |

Figure 4
15. Note the exponents that appear in the 2^n column of Figure 2. Note also that log10 2 ≈ 0.30103. What's going on here?
1.6 Big-Oh Notation
One way that sequences arise naturally in computer science is as lists of successive computed values; the sequences fact and two in Example 2 of 1.5 are of this sort. Another important application of sequences, especially later when we analyze algorithms, is to the problem of estimating how long a computation will take for a given input.
For example, think of sorting a list of n given integers into increasing order. There are lots of algorithms available for doing this job; you can probably think of several different methods yourself. Some algorithms are faster than others, and all of them take more time as n gets larger. If n is small, it probably doesn’t make much difference which method we choose, but for large values of n a good choice may lead to a substantial saving in time. We need a way to describe the time behavior of our algorithms.
In our sorting example, say the sequence t measures the time a particular algorithm takes, so t(n) is the time to sort a list of length n. On a faster computer we might cut all the values of t(n) by a factor of 2 or 100 or even 1000. But then all of our algorithms would get faster, too. For choosing between methods, what really matters is some measure, not of the absolute size of t(n), but of the rate at which t(n) grows as n gets large. Does it grow like 2^n, n^2, n or log2 n, or like some other function of n that we looked at in 1.5?
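The kind of timing measurement described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the text: the built-in `sorted` stands in for "a particular algorithm", and the helper name `sort_time` is ours.

```python
import random
import time

def sort_time(n):
    """Seconds to sort a random list of n integers.

    sorted() is just a stand-in for whatever sorting
    algorithm the sequence t(n) is supposed to measure.
    """
    data = [random.randint(0, n) for _ in range(n)]
    start = time.perf_counter()
    sorted(data)
    return time.perf_counter() - start

# Absolute values of t(n) depend on the machine; the point of this
# section is that only the rate of growth as n increases matters.
times = {n: sort_time(n) for n in (1_000, 10_000, 100_000)}
```

Dividing times[100_000] by times[10_000] gives a rough sense of the growth rate, though single runs are noisy.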
The main point of this section is to develop notation to describe rates of growth. Before we do so, we study more closely the relationships among familiar sequences such as log2 n, √n, n, n^2 and 2^n.
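To see how differently these sequences grow, one can tabulate them for a few values of n. A small sketch, not part of the text:

```python
import math

# Tabulate the familiar sequences compared in this section.
print(f"{'n':>4} {'log2 n':>8} {'sqrt n':>8} {'n^2':>8} {'2^n':>12}")
for n in (10, 20, 30):
    print(f"{n:>4} {math.log2(n):>8.2f} {math.sqrt(n):>8.2f}"
          f" {n**2:>8} {2**n:>12}")
```

Already at n = 30 the 2^n column dwarfs the others, which is the phenomenon the rest of the section makes precise.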
EXAMPLE 1 (a) For all positive integers n we have

1 ≤ √n ≤ n ≤ n^2 ≤ n^3 ≤ ···.

Of course, other exponents of n can be inserted into this string of inequalities. For example, n ≤ n^(3/2) ≤ n^2 for all n; recall that n^(3/2) = n·√n.
(b) We have n < 2^n for all n ∈ N. Actually, we have n ≤ 2^(n-1) for all n; this is clear for small values of n like 1, 2 and 3. In general,

n = (2/1) · (3/2) · (4/3) ⋯ (n/(n-1)).

There are n − 1 factors on the right and each of them is bounded by 2, so n ≤ 2^(n-1).
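The telescoping product in part (b) is easy to check numerically; a quick sketch, with the helper name `telescoping` ours, not the text's:

```python
from math import prod

def telescoping(n):
    # (2/1)*(3/2)*(4/3)*...*(n/(n-1)): intermediate terms cancel,
    # leaving exactly n (the empty product for n = 1 is 1).
    return prod(k / (k - 1) for k in range(2, n + 1))

for n in range(1, 25):
    assert abs(telescoping(n) - n) < 1e-9  # the product really is n
    assert n <= 2 ** (n - 1)               # each of the n-1 factors <= 2
```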
(c) n^2 ≤ 2^n for n ≥ 4. This is easily checked for n = 4; note that we get equality there, since 4^2 = 16 = 2^4. (The inequality also holds for n = 1 and n = 2, but fails at n = 3.) For n > 4 observe that

n^2 = 4^2 · (5/4)^2 · (6/5)^2 ⋯ (n/(n-1))^2.

Each factor on the right except the first one is at most (5/4)^2, and there are n − 4 such factors. Since (5/4)^2 = 1.5625 < 2, n^2 < 2^4 · 2^(n-4) = 2^n if n > 4. ■
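Part (c) can be spot-checked the same way; a sketch (not from the text) that also confirms n = 3 is the one exception:

```python
from math import prod

# n^2 <= 2^n holds for n = 1, 2 and all n >= 4; it fails only at n = 3.
assert 3 ** 2 > 2 ** 3
for n in range(4, 60):
    assert n ** 2 <= 2 ** n

# The telescoping product used in the argument for n > 4:
# n^2 = 4^2 * (5/4)^2 * (6/5)^2 * ... * (n/(n-1))^2.
for n in range(5, 30):
    product = 4 ** 2 * prod((k / (k - 1)) ** 2 for k in range(5, n + 1))
    assert abs(product - n ** 2) < 1e-6 * n ** 2
```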
EXAMPLE 2 (a) Since n ≤ 2^(n-1) for all n in N, we have

log2 n ≤ log2 2^(n-1) = n − 1 for n ≥ 1.

We can use this fact to show that

log2 x < x for all real numbers x > 0;

see Figure 1. Indeed, if n is the least integer bigger than x, then we have n − 1 ≤ x < n, so

log2 x < log2 n ≤ n − 1 ≤ x.
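The inequality just proved is easy to spot-check numerically; a quick sketch, not part of the text:

```python
import math

# log2(x) < x for every positive real x, including x < 1,
# where log2(x) is negative while x is positive.
for x in (0.001, 0.1, 0.5, 1.0, 2.0, 16.0, 1e6):
    assert math.log2(x) < x
```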