- •brief contents
- •contents
- •preface
- •acknowledgments
- •about this book
- •What’s new in the second edition
- •Who should read this book
- •Roadmap
- •Advice for data miners
- •Code examples
- •Code conventions
- •Author Online
- •About the author
- •about the cover illustration
- •1 Introduction to R
- •1.2 Obtaining and installing R
- •1.3 Working with R
- •1.3.1 Getting started
- •1.3.2 Getting help
- •1.3.3 The workspace
- •1.3.4 Input and output
- •1.4 Packages
- •1.4.1 What are packages?
- •1.4.2 Installing a package
- •1.4.3 Loading a package
- •1.4.4 Learning about a package
- •1.5 Batch processing
- •1.6 Using output as input: reusing results
- •1.7 Working with large datasets
- •1.8 Working through an example
- •1.9 Summary
- •2 Creating a dataset
- •2.1 Understanding datasets
- •2.2 Data structures
- •2.2.1 Vectors
- •2.2.2 Matrices
- •2.2.3 Arrays
- •2.2.4 Data frames
- •2.2.5 Factors
- •2.2.6 Lists
- •2.3 Data input
- •2.3.1 Entering data from the keyboard
- •2.3.2 Importing data from a delimited text file
- •2.3.3 Importing data from Excel
- •2.3.4 Importing data from XML
- •2.3.5 Importing data from the web
- •2.3.6 Importing data from SPSS
- •2.3.7 Importing data from SAS
- •2.3.8 Importing data from Stata
- •2.3.9 Importing data from NetCDF
- •2.3.10 Importing data from HDF5
- •2.3.11 Accessing database management systems (DBMSs)
- •2.3.12 Importing data via Stat/Transfer
- •2.4 Annotating datasets
- •2.4.1 Variable labels
- •2.4.2 Value labels
- •2.5 Useful functions for working with data objects
- •2.6 Summary
- •3 Getting started with graphs
- •3.1 Working with graphs
- •3.2 A simple example
- •3.3 Graphical parameters
- •3.3.1 Symbols and lines
- •3.3.2 Colors
- •3.3.3 Text characteristics
- •3.3.4 Graph and margin dimensions
- •3.4 Adding text, customized axes, and legends
- •3.4.1 Titles
- •3.4.2 Axes
- •3.4.3 Reference lines
- •3.4.4 Legend
- •3.4.5 Text annotations
- •3.4.6 Math annotations
- •3.5 Combining graphs
- •3.5.1 Creating a figure arrangement with fine control
- •3.6 Summary
- •4 Basic data management
- •4.1 A working example
- •4.2 Creating new variables
- •4.3 Recoding variables
- •4.4 Renaming variables
- •4.5 Missing values
- •4.5.1 Recoding values to missing
- •4.5.2 Excluding missing values from analyses
- •4.6 Date values
- •4.6.1 Converting dates to character variables
- •4.6.2 Going further
- •4.7 Type conversions
- •4.8 Sorting data
- •4.9 Merging datasets
- •4.9.1 Adding columns to a data frame
- •4.9.2 Adding rows to a data frame
- •4.10 Subsetting datasets
- •4.10.1 Selecting (keeping) variables
- •4.10.2 Excluding (dropping) variables
- •4.10.3 Selecting observations
- •4.10.4 The subset() function
- •4.10.5 Random samples
- •4.11 Using SQL statements to manipulate data frames
- •4.12 Summary
- •5 Advanced data management
- •5.2 Numerical and character functions
- •5.2.1 Mathematical functions
- •5.2.2 Statistical functions
- •5.2.3 Probability functions
- •5.2.4 Character functions
- •5.2.5 Other useful functions
- •5.2.6 Applying functions to matrices and data frames
- •5.3 A solution for the data-management challenge
- •5.4 Control flow
- •5.4.1 Repetition and looping
- •5.4.2 Conditional execution
- •5.5 User-written functions
- •5.6 Aggregation and reshaping
- •5.6.1 Transpose
- •5.6.2 Aggregating data
- •5.6.3 The reshape2 package
- •5.7 Summary
- •6 Basic graphs
- •6.1 Bar plots
- •6.1.1 Simple bar plots
- •6.1.2 Stacked and grouped bar plots
- •6.1.3 Mean bar plots
- •6.1.4 Tweaking bar plots
- •6.1.5 Spinograms
- •6.2 Pie charts
- •6.3 Histograms
- •6.4 Kernel density plots
- •6.5 Box plots
- •6.5.1 Using parallel box plots to compare groups
- •6.5.2 Violin plots
- •6.6 Dot plots
- •6.7 Summary
- •7 Basic statistics
- •7.1 Descriptive statistics
- •7.1.1 A menagerie of methods
- •7.1.2 Even more methods
- •7.1.3 Descriptive statistics by group
- •7.1.4 Additional methods by group
- •7.1.5 Visualizing results
- •7.2 Frequency and contingency tables
- •7.2.1 Generating frequency tables
- •7.2.2 Tests of independence
- •7.2.3 Measures of association
- •7.2.4 Visualizing results
- •7.3 Correlations
- •7.3.1 Types of correlations
- •7.3.2 Testing correlations for significance
- •7.3.3 Visualizing correlations
- •7.4 T-tests
- •7.4.3 When there are more than two groups
- •7.5 Nonparametric tests of group differences
- •7.5.1 Comparing two groups
- •7.5.2 Comparing more than two groups
- •7.6 Visualizing group differences
- •7.7 Summary
- •8 Regression
- •8.1 The many faces of regression
- •8.1.1 Scenarios for using OLS regression
- •8.1.2 What you need to know
- •8.2 OLS regression
- •8.2.1 Fitting regression models with lm()
- •8.2.2 Simple linear regression
- •8.2.3 Polynomial regression
- •8.2.4 Multiple linear regression
- •8.2.5 Multiple linear regression with interactions
- •8.3 Regression diagnostics
- •8.3.1 A typical approach
- •8.3.2 An enhanced approach
- •8.3.3 Global validation of linear model assumption
- •8.3.4 Multicollinearity
- •8.4 Unusual observations
- •8.4.1 Outliers
- •8.4.3 Influential observations
- •8.5 Corrective measures
- •8.5.1 Deleting observations
- •8.5.2 Transforming variables
- •8.5.3 Adding or deleting variables
- •8.5.4 Trying a different approach
- •8.6 Selecting the “best” regression model
- •8.6.1 Comparing models
- •8.6.2 Variable selection
- •8.7 Taking the analysis further
- •8.7.1 Cross-validation
- •8.7.2 Relative importance
- •8.8 Summary
- •9 Analysis of variance
- •9.1 A crash course on terminology
- •9.2 Fitting ANOVA models
- •9.2.1 The aov() function
- •9.2.2 The order of formula terms
- •9.3.1 Multiple comparisons
- •9.3.2 Assessing test assumptions
- •9.4 One-way ANCOVA
- •9.4.1 Assessing test assumptions
- •9.4.2 Visualizing the results
- •9.6 Repeated measures ANOVA
- •9.7 Multivariate analysis of variance (MANOVA)
- •9.7.1 Assessing test assumptions
- •9.7.2 Robust MANOVA
- •9.8 ANOVA as regression
- •9.9 Summary
- •10 Power analysis
- •10.1 A quick review of hypothesis testing
- •10.2 Implementing power analysis with the pwr package
- •10.2.1 t-tests
- •10.2.2 ANOVA
- •10.2.3 Correlations
- •10.2.4 Linear models
- •10.2.5 Tests of proportions
- •10.2.7 Choosing an appropriate effect size in novel situations
- •10.3 Creating power analysis plots
- •10.4 Other packages
- •10.5 Summary
- •11 Intermediate graphs
- •11.1 Scatter plots
- •11.1.3 3D scatter plots
- •11.1.4 Spinning 3D scatter plots
- •11.1.5 Bubble plots
- •11.2 Line charts
- •11.3 Corrgrams
- •11.4 Mosaic plots
- •11.5 Summary
- •12 Resampling statistics and bootstrapping
- •12.1 Permutation tests
- •12.2 Permutation tests with the coin package
- •12.2.2 Independence in contingency tables
- •12.2.3 Independence between numeric variables
- •12.2.5 Going further
- •12.3 Permutation tests with the lmPerm package
- •12.3.1 Simple and polynomial regression
- •12.3.2 Multiple regression
- •12.4 Additional comments on permutation tests
- •12.5 Bootstrapping
- •12.6 Bootstrapping with the boot package
- •12.6.1 Bootstrapping a single statistic
- •12.6.2 Bootstrapping several statistics
- •12.7 Summary
- •13 Generalized linear models
- •13.1 Generalized linear models and the glm() function
- •13.1.1 The glm() function
- •13.1.2 Supporting functions
- •13.1.3 Model fit and regression diagnostics
- •13.2 Logistic regression
- •13.2.1 Interpreting the model parameters
- •13.2.2 Assessing the impact of predictors on the probability of an outcome
- •13.2.3 Overdispersion
- •13.2.4 Extensions
- •13.3 Poisson regression
- •13.3.1 Interpreting the model parameters
- •13.3.2 Overdispersion
- •13.3.3 Extensions
- •13.4 Summary
- •14 Principal components and factor analysis
- •14.1 Principal components and factor analysis in R
- •14.2 Principal components
- •14.2.1 Selecting the number of components to extract
- •14.2.2 Extracting principal components
- •14.2.3 Rotating principal components
- •14.2.4 Obtaining principal components scores
- •14.3 Exploratory factor analysis
- •14.3.1 Deciding how many common factors to extract
- •14.3.2 Extracting common factors
- •14.3.3 Rotating factors
- •14.3.4 Factor scores
- •14.4 Other latent variable models
- •14.5 Summary
- •15 Time series
- •15.1 Creating a time-series object in R
- •15.2 Smoothing and seasonal decomposition
- •15.2.1 Smoothing with simple moving averages
- •15.2.2 Seasonal decomposition
- •15.3 Exponential forecasting models
- •15.3.1 Simple exponential smoothing
- •15.3.3 The ets() function and automated forecasting
- •15.4 ARIMA forecasting models
- •15.4.1 Prerequisite concepts
- •15.4.2 ARMA and ARIMA models
- •15.4.3 Automated ARIMA forecasting
- •15.5 Going further
- •15.6 Summary
- •16 Cluster analysis
- •16.1 Common steps in cluster analysis
- •16.2 Calculating distances
- •16.3 Hierarchical cluster analysis
- •16.4 Partitioning cluster analysis
- •16.4.2 Partitioning around medoids
- •16.5 Avoiding nonexistent clusters
- •16.6 Summary
- •17 Classification
- •17.1 Preparing the data
- •17.2 Logistic regression
- •17.3 Decision trees
- •17.3.1 Classical decision trees
- •17.3.2 Conditional inference trees
- •17.4 Random forests
- •17.5 Support vector machines
- •17.5.1 Tuning an SVM
- •17.6 Choosing a best predictive solution
- •17.7 Using the rattle package for data mining
- •17.8 Summary
- •18 Advanced methods for missing data
- •18.1 Steps in dealing with missing data
- •18.2 Identifying missing values
- •18.3 Exploring missing-values patterns
- •18.3.1 Tabulating missing values
- •18.3.2 Exploring missing data visually
- •18.3.3 Using correlations to explore missing values
- •18.4 Understanding the sources and impact of missing data
- •18.5 Rational approaches for dealing with incomplete data
- •18.6 Complete-case analysis (listwise deletion)
- •18.7 Multiple imputation
- •18.8 Other approaches to missing data
- •18.8.1 Pairwise deletion
- •18.8.2 Simple (nonstochastic) imputation
- •18.9 Summary
- •19 Advanced graphics with ggplot2
- •19.1 The four graphics systems in R
- •19.2 An introduction to the ggplot2 package
- •19.3 Specifying the plot type with geoms
- •19.4 Grouping
- •19.5 Faceting
- •19.6 Adding smoothed lines
- •19.7 Modifying the appearance of ggplot2 graphs
- •19.7.1 Axes
- •19.7.2 Legends
- •19.7.3 Scales
- •19.7.4 Themes
- •19.7.5 Multiple graphs per page
- •19.8 Saving graphs
- •19.9 Summary
- •20 Advanced programming
- •20.1 A review of the language
- •20.1.1 Data types
- •20.1.2 Control structures
- •20.1.3 Creating functions
- •20.2 Working with environments
- •20.3 Object-oriented programming
- •20.3.1 Generic functions
- •20.3.2 Limitations of the S3 model
- •20.4 Writing efficient code
- •20.5 Debugging
- •20.5.1 Common sources of errors
- •20.5.2 Debugging tools
- •20.5.3 Session options that support debugging
- •20.6 Going further
- •20.7 Summary
- •21 Creating a package
- •21.1 Nonparametric analysis and the npar package
- •21.1.1 Comparing groups with the npar package
- •21.2 Developing the package
- •21.2.1 Computing the statistics
- •21.2.2 Printing the results
- •21.2.3 Summarizing the results
- •21.2.4 Plotting the results
- •21.2.5 Adding sample data to the package
- •21.3 Creating the package documentation
- •21.4 Building the package
- •21.5 Going further
- •21.6 Summary
- •22 Creating dynamic reports
- •22.1 A template approach to reports
- •22.2 Creating dynamic reports with R and Markdown
- •22.3 Creating dynamic reports with R and LaTeX
- •22.4 Creating dynamic reports with R and Open Document
- •22.5 Creating dynamic reports with R and Microsoft Word
- •22.6 Summary
- •afterword Into the rabbit hole
- •appendix A Graphical user interfaces
- •appendix B Customizing the startup environment
- •appendix C Exporting data from R
- •Delimited text file
- •Excel spreadsheet
- •Statistical applications
- •appendix D Matrix algebra in R
- •appendix E Packages used in this book
- •appendix F Working with large datasets
- •F.1 Efficient programming
- •F.2 Storing data outside of RAM
- •F.3 Analytic packages for out-of-memory data
- •F.4 Comprehensive solutions for working with enormous datasets
- •appendix G Updating an R installation
- •G.1 Automated installation (Windows only)
- •G.2 Manual installation (Windows and Mac OS X)
- •G.3 Updating an R installation (Linux)
- •references
- •index
- •Symbols
- •Numerics
- •23.1 The lattice package
- •23.2 Conditioning variables
- •23.3 Panel functions
- •23.4 Grouping variables
- •23.5 Graphic parameters
- •23.6 Customizing plot strips
- •23.7 Page arrangement
- •23.8 Going further
[Figure 21.3 appears here: side-by-side box plots titled "Multiple Comparisons," with Healthy Life Expectancy (years) at Age 65 on the vertical axis and US Region (South, North Central, West, Northeast) on the horizontal axis. Each group's box is labeled with its median (md=13, md=15.4, md=15.6, md=15.7) and sample size (n=16, n=12, n=13, n=9), and a dotted line marks the overall median.]
Figure 21.3 Annotated box plots displaying group differences. The plot is annotated with the medians and sample sizes for each group. The dotted horizontal line represents the overall median.
family-wise error rate (the probability of finding one or more erroneous differences in a set of comparisons) at a reasonable level (say, .05).
The oneway() function accomplishes this by calling the p.adjust() function in the base R installation. The p.adjust() function adjusts p-values to account for multiple comparisons using one of several methods. The Bonferroni correction is perhaps the best known, but the Holm correction is more powerful and is therefore set as the default.
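As a quick illustration of the difference between the two corrections (toy p-values, not from the HLE analysis), p.adjust() can be applied directly to a vector of raw p-values:

```r
# Three hypothetical raw p-values
p <- c(0.01, 0.02, 0.04)

# Bonferroni: every p-value is multiplied by the number of tests (capped at 1)
p.adjust(p, method = "bonferroni")   # 0.03 0.06 0.12

# Holm: step-down multipliers (3, 2, 1 for the ordered p-values), so adjusted
# p-values are never larger than Bonferroni's
p.adjust(p, method = "holm")         # 0.03 0.04 0.04
```

Because Holm's adjusted p-values are smaller or equal, it rejects at least as many null hypotheses as Bonferroni while still controlling the family-wise error rate.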
Differences among the groups are easiest to see with a graph. The plot() statement produces the side-by-side box plots in figure 21.3. The plot is annotated with a key that indicates the median and sample size for each group. A dotted horizontal line indicates the overall median for all observations combined.
It’s clear from these analyses that women in the South can expect fewer years of health past age 65. This has implications for the distribution and focus of health services. You might want to analyze the HLE estimates for males and see if you reach a similar conclusion.
The next section describes the code files for the npar package. You can download them (and save yourself some typing) from www.statmethods.net/RiA/nparFiles.zip.
21.2 Developing the package
The npar package consists of four functions: oneway(), print.oneway(), summary.oneway(), and plot.oneway(). The first is the primary function that computes the statistics, and the others are S3 object-oriented generic functions (see section 20.3.1) used to print and plot the results. Here, oneway indicates that there is a single grouping factor.
It’s a good idea to place each function in a separate text file with a .R extension. This isn’t strictly necessary, but it makes organizing the work easier. Additionally, it isn’t necessary for the names of the functions and the names of the files to match, but again, it’s good coding practice. The files are provided in listings 21.2 through 21.5.
Each file has a header consisting of a set of comments that start with the characters #'. The R interpreter ignores these lines, but you’ll use the roxygen2 package to turn the comments into your package documentation. These header comments will be discussed in section 21.3.
The oneway() function computes the statistics, and the print(), summary(), and plot() functions display the results. In the next section, you’ll develop the oneway() function.
21.2.1 Computing the statistics
The oneway() function in the oneway.R text file performs all the statistical computations required.
Listing 21.2 Contents of the oneway.R file
#' @title Nonparametric group comparisons
#'
#' @description
#' \code{oneway} computes nonparametric group comparisons, including an
#' omnibus test and post-hoc pairwise group comparisons.
#'
#' @details
#' This function computes an omnibus Kruskal-Wallis test that the
#' groups are equal, followed by all pairwise comparisons using
#' Wilcoxon Rank Sum tests. Exact Wilcoxon tests can be requested if
#' there are no ties on the dependent variable. The p-values are
#' adjusted for multiple comparisons using the \code{\link{p.adjust}}
#' function.
#'
#' @param formula an object of class formula, relating the dependent
#' variable to the grouping variable.
#' @param data a data frame containing the variables in the model.
#' @param exact logical. If \code{TRUE}, calculate exact Wilcoxon tests.
#' @param sort logical. If \code{TRUE}, sort groups by median dependent
#' variable values.
#' @param method method for correcting p-values for multiple comparisons.
#' @export
#' @return a list with 7 elements:
#' \item{CALL}{function call}
#' \item{data}{data frame containing the dependent and grouping variable}
#' \item{sumstats}{data frame with descriptive statistics by group}
#' \item{kw}{results of the Kruskal-Wallis test}
#' \item{method}{method used to adjust p-values}
#' \item{wmc}{data frame containing the multiple comparisons}
#' \item{vnames}{variable names}
#' @author Rob Kabacoff <rkabacoff@@statmethods.net>
#' @examples
#' results <- oneway(hlef ~ region, life)
#' summary(results)
#' plot(results, col="lightblue", main="Multiple Comparisons",
#'      xlab="US Region", ylab="Healthy Life Expectancy at Age 65")
oneway <- function(formula, data, exact=FALSE, sort=TRUE,          # b Function call
                   method=c("holm", "hochberg", "hommel", "bonferroni",
                            "BH", "BY", "fdr", "none")){

  if (missing(formula) || class(formula) != "formula" ||           # c Checks arguments
      length(all.vars(formula)) != 2)
    stop("'formula' is missing or incorrect")
  method <- match.arg(method)

  df <- model.frame(formula, data)                                 # d Sets up data
  y <- df[[1]]
  g <- as.factor(df[[2]])
  vnames <- names(df)

  if(sort) g <- reorder(g, y, FUN=median)                          # e Reorders factor levels
  groups <- levels(g)
  k <- nlevels(g)

  getstats <- function(x)(c(N = length(x), Median = median(x),     # f Summary statistics
                            MAD = mad(x)))
  sumstats <- t(aggregate(y, by=list(g), FUN=getstats)[2])
  rownames(sumstats) <- c("n", "median", "mad")
  colnames(sumstats) <- groups

  kw <- kruskal.test(formula, data)                                # g Statistical tests
  wmc <- NULL
  for (i in 1:(k-1)){
    for (j in (i+1):k){
      y1 <- y[g==groups[i]]
      y2 <- y[g==groups[j]]
      test <- wilcox.test(y1, y2, exact=exact)
      r <- data.frame(Group.1=groups[i], Group.2=groups[j],
                      W=test$statistic[[1]], p=test$p.value)
      # note the [[]] to return a single number
      wmc <- rbind(wmc, r)
    }
  }
  wmc$p <- p.adjust(wmc$p, method=method)

  data <- data.frame(y, g)
  names(data) <- vnames
  results <- list(CALL = match.call(), data=data,
                  sumstats=sumstats, kw=kw,
                  method=method, wmc=wmc, vnames=vnames)
  class(results) <- c("oneway", "list")
  return(results)                                                  # h Returns results
}
The header contains comments starting with #' that will be used by the roxygen2 package to create package documentation (see section 21.3). Next you see the
function argument list b. The user provides a formula of the form dependent variable~grouping variable and a data frame containing the data. By default, approximate p-values are computed, and the groups are ordered by their median dependent variable values. The user can choose from among eight adjustment methods, with the holm method (the first option in the list) chosen by default.
Once the user enters the arguments, they’re scanned for errors c. The if() function tests that the formula isn’t missing, that it’s a formula (variables ~ variables), and that there is only one variable on each side of the tilde (~). If any of these three conditions isn’t true, the stop() function halts execution, prints an error message, and returns the user to the R prompt. For debugging purposes, you can alter the error action with the options(error=) function. See section 20.5.3 for details.
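The argument check can be exercised in isolation with a small wrapper (check_formula() is a hypothetical helper written here for illustration; it simply restates the test inside oneway()):

```r
# Hypothetical stand-alone version of the argument check used in oneway()
check_formula <- function(formula) {
  if (missing(formula) || class(formula) != "formula" ||
      length(all.vars(formula)) != 2)
    stop("'formula' is missing or incorrect")
  TRUE
}

check_formula(hle ~ region)                # passes: one variable on each side
try(check_formula(hle ~ region + gender))  # error: three variables in the formula
```

Note that the formula itself is never evaluated here, so the variables don't need to exist yet; only the structure of the formula is inspected.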
The match.arg(arg, choices) function ensures that the user has entered an argument that matches one of the strings in the choices character vector. If a match isn’t found, an error is thrown, and, again, oneway() exits.
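A minimal sketch of this behavior, using a hypothetical function with the same choices vector as oneway():

```r
# Hypothetical function illustrating match.arg() with oneway()'s method choices
pick_method <- function(method = c("holm", "hochberg", "hommel", "bonferroni",
                                   "BH", "BY", "fdr", "none")) {
  match.arg(method)
}

pick_method()              # "holm": the first element is the default
pick_method("bonferroni")  # a valid choice is passed through unchanged
try(pick_method("tukey"))  # error: not among the allowed choices
```

match.arg() also accepts unambiguous partial matches, so pick_method("bonf") would return "bonferroni" as well.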
Next, the model.frame() function is used to create a data frame containing the dependent variable as the first column and the grouping variable as the second column d. In general, model.frame() returns a data frame containing all the variables in a formula. From this data frame, you create a numeric vector (y) containing the dependent variable and a factor vector (g) containing the grouping variable. The character vector vnames contains the variable names.
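To see this step on its own, the built-in mtcars data can stand in for the book's life data (an illustrative substitution, not the chapter's dataset):

```r
# model.frame() returns the variables in the formula as a data frame,
# dependent variable first, grouping variable second
df <- model.frame(mpg ~ cyl, data = mtcars)
names(df)               # "mpg" "cyl"

y <- df[[1]]            # numeric dependent variable
g <- as.factor(df[[2]]) # grouping variable coerced to a factor
levels(g)               # "4" "6" "8"
```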
If sort=TRUE, you use the reorder() function to reorder the levels of the grouping variable g by the median dependent variable values y e. This is the default. The character vector groups contains the names of the groups, and the value k contains the number of groups.
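A toy example (invented values) shows what the reordering does: the group with the lowest median comes first in the factor levels, which is what places it first in the box plot:

```r
# Group "b" has the lower median (2 vs. 11), so it becomes the first level
g <- factor(c("a", "a", "b", "b"))
y <- c(10, 12, 1, 3)

g <- reorder(g, y, FUN = median)
levels(g)   # "b" "a"
```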
Next, a numeric matrix (sumstats) is created, containing the sample size, median, and median absolute deviation for each group f. The aggregate() function uses the getstats() function to calculate the summary statistics, and the remaining code formats the table so that groups are columns and statistics are rows (I thought this was more attractive).
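The same aggregation step, restated stand-alone on toy data (invented values, not the HLE estimates), shows the shape of the result:

```r
# Toy data: two groups of three observations each
y <- c(1, 2, 3, 10, 11, 12)
g <- factor(c("a", "a", "a", "b", "b", "b"))

getstats <- function(x) c(N = length(x), Median = median(x), MAD = mad(x))

# aggregate() applies getstats() by group; [2] keeps only the statistics,
# and t() flips the table so that groups are columns and statistics are rows
sumstats <- t(aggregate(y, by = list(g), FUN = getstats)[2])
rownames(sumstats) <- c("n", "median", "mad")
colnames(sumstats) <- levels(g)

sumstats["median", ]   # a: 2, b: 11
```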
The statistical tests are then computed g. The results of the Kruskal–Wallis test are saved to a list called kw. The nested for() loops calculate every pairwise Wilcoxon test. The results of these pairwise tests are saved in the wmc data frame:
        Group.1       Group.2    W        p
1         South North Central 28.0 0.008583
2         South          West 27.0 0.004738
3         South     Northeast 17.0 0.008583
4 North Central          West 63.5 1.000000
5 North Central     Northeast 42.0 1.000000
6          West     Northeast 54.5 1.000000
Here, Group.1 and Group.2 indicate the groups being compared to each other, W is the Wilcoxon statistic, and p is the (adjusted) p-value for each comparison.
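The loop that builds wmc can be exercised on its own with toy data (illustrative simulated values, not the HLE data), confirming that it produces one row per pair of groups:

```r
set.seed(1234)
# Three simulated groups of 10 observations; groups "b" and "c" share a mean
y <- c(rnorm(10, 0), rnorm(10, 2), rnorm(10, 2))
g <- factor(rep(c("a", "b", "c"), each = 10))
groups <- levels(g)
k <- nlevels(g)

wmc <- NULL
for (i in 1:(k - 1)) {
  for (j in (i + 1):k) {
    test <- wilcox.test(y[g == groups[i]], y[g == groups[j]], exact = FALSE)
    wmc <- rbind(wmc, data.frame(Group.1 = groups[i], Group.2 = groups[j],
                                 W = test$statistic[[1]], p = test$p.value))
  }
}
wmc$p <- p.adjust(wmc$p, method = "holm")

nrow(wmc)   # choose(3, 2) = 3 pairwise comparisons
```

With k groups the loop always produces choose(k, 2) rows, which is why the p-value adjustment for multiple comparisons matters as k grows.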
Finally, the results are bundled up and returned as a list h. The list contains seven components, which are summarized in table 21.1. Additionally, you set the class of the