
CHAPTER 16  Cluster analysis

[Figure: dendrogram titled "Average-Linkage Clustering — 5 Cluster Solution"; y-axis "Height" (0 to 5); the leaves are the food items (sardines canned; clams raw and canned; beef heart; beef roast through pork simmered; mackerel canned through bluefish baked); produced from the distance matrix d by hclust(*, "average"). Caption below.]

Figure 16.3 Average-linkage clustering of the nutrient data with a five-cluster solution

The dendrogram is replotted, and the rect.hclust() function is used to superimpose the five-cluster solution (d). The results are displayed in figure 16.3.
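For reference, the plotting idiom looks like the following — a minimal sketch assuming the average-linkage fit from earlier in the chapter is stored in an hclust object named fit.average (the object name is illustrative):

plot(fit.average, hang=-1, cex=.8,
     main="Average-Linkage Clustering\n5 Cluster Solution")   # replot the dendrogram
rect.hclust(fit.average, k=5)    # draw boxes around the five-cluster solution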

Sardines form their own cluster and are much higher in calcium than the other food groups. Beef heart is also a singleton and is high in protein and iron. The clam cluster is low in protein and high in iron. The items in the cluster containing beef roast to pork simmered are high in energy and fat. Finally, the largest group (mackerel to bluefish) is relatively low in iron.

Hierarchical clustering can be particularly useful when you expect nested clustering and a meaningful hierarchy. This is often the case in the biological sciences. But the hierarchical algorithms are greedy in the sense that once an observation is assigned to a cluster, it can’t be reassigned later in the process. Additionally, hierarchical clustering is difficult to apply in large samples, where there may be hundreds or even thousands of observations. Partitioning methods can work well in these situations.

16.4 Partitioning cluster analysis

In the partitioning approach, observations are divided into K groups and reshuffled to form the most cohesive clusters possible according to a given criterion. This section considers two methods: k-means and partitioning around medoids (PAM).

16.4.1 K-means clustering

The most common partitioning method is k-means cluster analysis. Conceptually, the k-means algorithm is as follows:


1. Select K centroids (K rows chosen at random).

2. Assign each data point to its closest centroid.

3. Recalculate the centroids as the average of all data points in a cluster (that is, the centroids are p-length mean vectors, where p is the number of variables).

4. Assign data points to their closest centroids.

5. Continue steps 3 and 4 until the observations aren't reassigned or the maximum number of iterations (R uses 10 as a default) is reached.

Implementation details for this approach can vary.
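To make steps 1–5 concrete, here is a deliberately bare-bones sketch of the loop in R. It is illustrative only — not the algorithm kmeans() actually uses — and it assumes no cluster ever becomes empty:

simple_kmeans <- function(x, K, iter.max=10){
  x <- as.matrix(x)
  centroids <- x[sample(nrow(x), K), , drop=FALSE]       # step 1: K random rows
  cluster <- rep(0, nrow(x))
  for (iter in 1:iter.max){                              # step 5: iterate to a limit
    # squared Euclidean distance from every observation to every centroid
    d <- sapply(1:K, function(k) colSums((t(x) - centroids[k, ])^2))
    new.cluster <- apply(d, 1, which.min)                # steps 2 and 4: closest centroid
    if (all(new.cluster == cluster)) break               # no reassignments: converged
    cluster <- new.cluster
    centroids <- apply(x, 2, tapply, cluster, mean)      # step 3: p-length mean vectors
  }
  list(cluster=cluster, centers=centroids)
}

On well-separated data this produces the same kind of partition as kmeans(); the Hartigan–Wong refinement described next is faster and usually finds a better local minimum.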

R uses an efficient algorithm by Hartigan and Wong (1979) that partitions the observations into k groups such that the sum of the squared distances from the observations to their assigned cluster centers is minimized. This means that in steps 2 and 4, each observation is assigned to the cluster with the smallest value of

$$ss(k) = \sum_{i=1}^{n} \sum_{j=1}^{p} \left( x_{ij} - \bar{x}_{kj} \right)^{2}$$

where k is the cluster, $x_{ij}$ is the value of the jth variable for the ith observation, $\bar{x}_{kj}$ is the mean of the jth variable for the kth cluster, and p is the number of variables.
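As a quick check on this criterion, the within-cluster sums of squares can be computed by hand on simulated data and compared with the $withinss component that kmeans() returns (an illustrative sketch):

set.seed(42)
x <- matrix(rnorm(200), ncol=2)              # 100 observations, p = 2 variables
fit <- kmeans(x, centers=3)
ss <- sapply(1:3, function(k){
  xk <- x[fit$cluster == k, , drop=FALSE]
  sum(sweep(xk, 2, colMeans(xk))^2)          # sum over i and j of (x_ij - xbar_kj)^2
})
all.equal(as.numeric(ss), as.numeric(fit$withinss))   # TRUE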

K-means clustering can handle larger datasets than hierarchical cluster approaches. Additionally, observations aren’t permanently committed to a cluster. They’re moved when doing so improves the overall solution. But the use of means implies that all variables must be continuous, and the approach can be severely affected by outliers. It also performs poorly in the presence of non-convex (for example, U-shaped) clusters.

The format of the k-means function in R is kmeans(x, centers), where x is a numeric dataset (matrix or data frame) and centers is the number of clusters to extract. The function returns the cluster memberships, centroids, sums of squares (within, between, total), and cluster sizes.

Because k-means cluster analysis starts with k randomly chosen centroids, a different solution can be obtained each time the function is invoked. Use the set.seed() function to guarantee that the results are reproducible. Additionally, this clustering approach can be sensitive to the initial selection of centroids. The kmeans() function has an nstart option that attempts multiple initial configurations and reports on the best one. For example, adding nstart=25 generates 25 initial configurations. This approach is often recommended.

Unlike hierarchical clustering, k-means clustering requires that you specify in advance the number of clusters to extract. Again, the NbClust package can be used as a guide. Additionally, a plot of the total within-groups sums of squares against the number of clusters in a k-means solution can be helpful. A bend in the graph (similar to the bend in the Scree test described in section 14.2.1) can suggest the appropriate number of clusters.

The graph can be produced with the following function:

wssplot <- function(data, nc=15, seed=1234){
  # with a single cluster, the within-groups SS equals the total SS
  wss <- (nrow(data)-1) * sum(apply(data, 2, var))
  for (i in 2:nc){
    set.seed(seed)
    wss[i] <- sum(kmeans(data, centers=i)$withinss)
  }
  plot(1:nc, wss, type="b",
       xlab="Number of Clusters",
       ylab="Within groups sum of squares")
}

The data parameter is the numeric dataset to be analyzed, nc is the maximum number of clusters to consider, and seed is a random-number seed.

Let’s apply k-means clustering to a dataset containing 13 chemical measurements on 178 Italian wine samples. The data originally come from the UCI Machine Learning Repository (www.ics.uci.edu/~mlearn/MLRepository.html), but you’ll access them here via the rattle package. In this dataset, the observations represent three wine varietals, as indicated by the first variable (Type). You’ll drop this variable, perform the cluster analysis, and see if you can recover the known structure.

Listing 16.4 K-means clustering of wine data

> data(wine, package="rattle")
> head(wine)
  Type Alcohol Malic  Ash Alcalinity Magnesium Phenols Flavanoids
1    1   14.23  1.71 2.43       15.6       127    2.80       3.06
2    1   13.20  1.78 2.14       11.2       100    2.65       2.76
3    1   13.16  2.36 2.67       18.6       101    2.80       3.24
4    1   14.37  1.95 2.50       16.8       113    3.85       3.49
5    1   13.24  2.59 2.87       21.0       118    2.80       2.69
6    1   14.20  1.76 2.45       15.2       112    3.27       3.39
  Nonflavanoids Proanthocyanins Color  Hue Dilution Proline
1          0.28            2.29  5.64 1.04     3.92    1065
2          0.26            1.28  4.38 1.05     3.40    1050
3          0.30            2.81  5.68 1.03     3.17    1185
4          0.24            2.18  7.80 0.86     3.45    1480
5          0.39            1.82  4.32 1.04     2.93     735
6          0.34            1.97  6.75 1.05     2.85    1450

> df <- scale(wine[-1])        # (b) standardizes the data

> wssplot(df)                  # (c) determines the number of clusters
> library(NbClust)
> set.seed(1234)
> devAskNewPage(ask=TRUE)
> nc <- NbClust(df, min.nc=2, max.nc=15, method="kmeans")
> table(nc$Best.n[1,])

 0  2  3  8 13 14 15
 2  3 14  1  2  1  1

> barplot(table(nc$Best.n[1,]),
          xlab="Number of Clusters", ylab="Number of Criteria",
          main="Number of Clusters Chosen by 26 Criteria")
> set.seed(1234)
> fit.km <- kmeans(df, 3, nstart=25)    # (d) performs the k-means cluster analysis
> fit.km$size
[1] 62 65 51

> fit.km$centers
  Alcohol Malic   Ash Alcalinity Magnesium Phenols Flavanoids Nonflavanoids
1    0.83 -0.30  0.36      -0.61     0.576   0.883      0.975        -0.561
2   -0.92 -0.39 -0.49       0.17    -0.490  -0.076      0.021        -0.033
3    0.16  0.87  0.19       0.52    -0.075  -0.977     -1.212         0.724
  Proanthocyanins Color   Hue Dilution Proline
1           0.579  0.17  0.47     0.78    1.12
2           0.058 -0.90  0.46     0.27   -0.75
3          -0.778  0.94 -1.16    -1.29   -0.41

> aggregate(wine[-1], by=list(cluster=fit.km$cluster), mean)
  cluster Alcohol Malic Ash Alcalinity Magnesium Phenols Flavanoids
1       1      14   1.8 2.4         17       106     2.8        3.0
2       2      12   1.6 2.2         20        88     2.2        2.0
3       3      13   3.3 2.4         21        97     1.6        0.7
  Nonflavanoids Proanthocyanins Color  Hue Dilution Proline
1          0.29             1.9   5.4 1.07      3.2    1072
2          0.35             1.6   2.9 1.04      2.8     495
3          0.47             1.1   7.3 0.67      1.7     620

Because the variables vary in range, they're standardized prior to clustering (b). Next, the number of clusters is determined using the wssplot() and NbClust() functions (c). Figure 16.4 indicates that there is a distinct drop in the within-groups sum of squares when moving from one to three clusters. After three clusters, this decrease drops off, suggesting that a three-cluster solution may be a good fit to the data. In figure 16.5, 14 of 24 criteria provided by the NbClust package suggest a three-cluster solution. Note that not all 30 criteria can be calculated for every dataset.

A final cluster solution is obtained with the kmeans() function, and the cluster centroids are printed (d). Because the centroids provided by the function are based on the standardized data, the aggregate() function is used along with the cluster memberships to obtain the variable means for each cluster in the original metric.

[Figure: line plot of the within-groups sum of squares (y-axis, roughly 1000 to 2000) against the number of clusters (x-axis, 2 to 14), produced by wssplot(df). Caption below.]

Figure 16.4 Plotting the within-groups sums of squares vs. the number of clusters extracted. The sharp decrease from one to three clusters (with little decrease after) suggests a three-cluster solution.
