- brief contents
- contents
- preface
- acknowledgments
- about this book
- What’s new in the second edition
- Who should read this book
- Roadmap
- Advice for data miners
- Code examples
- Code conventions
- Author Online
- About the author
- about the cover illustration
- 1 Introduction to R
- 1.2 Obtaining and installing R
- 1.3 Working with R
- 1.3.1 Getting started
- 1.3.2 Getting help
- 1.3.3 The workspace
- 1.3.4 Input and output
- 1.4 Packages
- 1.4.1 What are packages?
- 1.4.2 Installing a package
- 1.4.3 Loading a package
- 1.4.4 Learning about a package
- 1.5 Batch processing
- 1.6 Using output as input: reusing results
- 1.7 Working with large datasets
- 1.8 Working through an example
- 1.9 Summary
- 2 Creating a dataset
- 2.1 Understanding datasets
- 2.2 Data structures
- 2.2.1 Vectors
- 2.2.2 Matrices
- 2.2.3 Arrays
- 2.2.4 Data frames
- 2.2.5 Factors
- 2.2.6 Lists
- 2.3 Data input
- 2.3.1 Entering data from the keyboard
- 2.3.2 Importing data from a delimited text file
- 2.3.3 Importing data from Excel
- 2.3.4 Importing data from XML
- 2.3.5 Importing data from the web
- 2.3.6 Importing data from SPSS
- 2.3.7 Importing data from SAS
- 2.3.8 Importing data from Stata
- 2.3.9 Importing data from NetCDF
- 2.3.10 Importing data from HDF5
- 2.3.11 Accessing database management systems (DBMSs)
- 2.3.12 Importing data via Stat/Transfer
- 2.4 Annotating datasets
- 2.4.1 Variable labels
- 2.4.2 Value labels
- 2.5 Useful functions for working with data objects
- 2.6 Summary
- 3 Getting started with graphs
- 3.1 Working with graphs
- 3.2 A simple example
- 3.3 Graphical parameters
- 3.3.1 Symbols and lines
- 3.3.2 Colors
- 3.3.3 Text characteristics
- 3.3.4 Graph and margin dimensions
- 3.4 Adding text, customized axes, and legends
- 3.4.1 Titles
- 3.4.2 Axes
- 3.4.3 Reference lines
- 3.4.4 Legend
- 3.4.5 Text annotations
- 3.4.6 Math annotations
- 3.5 Combining graphs
- 3.5.1 Creating a figure arrangement with fine control
- 3.6 Summary
- 4 Basic data management
- 4.1 A working example
- 4.2 Creating new variables
- 4.3 Recoding variables
- 4.4 Renaming variables
- 4.5 Missing values
- 4.5.1 Recoding values to missing
- 4.5.2 Excluding missing values from analyses
- 4.6 Date values
- 4.6.1 Converting dates to character variables
- 4.6.2 Going further
- 4.7 Type conversions
- 4.8 Sorting data
- 4.9 Merging datasets
- 4.9.1 Adding columns to a data frame
- 4.9.2 Adding rows to a data frame
- 4.10 Subsetting datasets
- 4.10.1 Selecting (keeping) variables
- 4.10.2 Excluding (dropping) variables
- 4.10.3 Selecting observations
- 4.10.4 The subset() function
- 4.10.5 Random samples
- 4.11 Using SQL statements to manipulate data frames
- 4.12 Summary
- 5 Advanced data management
- 5.2 Numerical and character functions
- 5.2.1 Mathematical functions
- 5.2.2 Statistical functions
- 5.2.3 Probability functions
- 5.2.4 Character functions
- 5.2.5 Other useful functions
- 5.2.6 Applying functions to matrices and data frames
- 5.3 A solution for the data-management challenge
- 5.4 Control flow
- 5.4.1 Repetition and looping
- 5.4.2 Conditional execution
- 5.5 User-written functions
- 5.6 Aggregation and reshaping
- 5.6.1 Transpose
- 5.6.2 Aggregating data
- 5.6.3 The reshape2 package
- 5.7 Summary
- 6 Basic graphs
- 6.1 Bar plots
- 6.1.1 Simple bar plots
- 6.1.2 Stacked and grouped bar plots
- 6.1.3 Mean bar plots
- 6.1.4 Tweaking bar plots
- 6.1.5 Spinograms
- 6.2 Pie charts
- 6.3 Histograms
- 6.4 Kernel density plots
- 6.5 Box plots
- 6.5.1 Using parallel box plots to compare groups
- 6.5.2 Violin plots
- 6.6 Dot plots
- 6.7 Summary
- 7 Basic statistics
- 7.1 Descriptive statistics
- 7.1.1 A menagerie of methods
- 7.1.2 Even more methods
- 7.1.3 Descriptive statistics by group
- 7.1.4 Additional methods by group
- 7.1.5 Visualizing results
- 7.2 Frequency and contingency tables
- 7.2.1 Generating frequency tables
- 7.2.2 Tests of independence
- 7.2.3 Measures of association
- 7.2.4 Visualizing results
- 7.3 Correlations
- 7.3.1 Types of correlations
- 7.3.2 Testing correlations for significance
- 7.3.3 Visualizing correlations
- 7.4 T-tests
- 7.4.3 When there are more than two groups
- 7.5 Nonparametric tests of group differences
- 7.5.1 Comparing two groups
- 7.5.2 Comparing more than two groups
- 7.6 Visualizing group differences
- 7.7 Summary
- 8 Regression
- 8.1 The many faces of regression
- 8.1.1 Scenarios for using OLS regression
- 8.1.2 What you need to know
- 8.2 OLS regression
- 8.2.1 Fitting regression models with lm()
- 8.2.2 Simple linear regression
- 8.2.3 Polynomial regression
- 8.2.4 Multiple linear regression
- 8.2.5 Multiple linear regression with interactions
- 8.3 Regression diagnostics
- 8.3.1 A typical approach
- 8.3.2 An enhanced approach
- 8.3.3 Global validation of linear model assumption
- 8.3.4 Multicollinearity
- 8.4 Unusual observations
- 8.4.1 Outliers
- 8.4.3 Influential observations
- 8.5 Corrective measures
- 8.5.1 Deleting observations
- 8.5.2 Transforming variables
- 8.5.3 Adding or deleting variables
- 8.5.4 Trying a different approach
- 8.6 Selecting the “best” regression model
- 8.6.1 Comparing models
- 8.6.2 Variable selection
- 8.7 Taking the analysis further
- 8.7.1 Cross-validation
- 8.7.2 Relative importance
- 8.8 Summary
- 9 Analysis of variance
- 9.1 A crash course on terminology
- 9.2 Fitting ANOVA models
- 9.2.1 The aov() function
- 9.2.2 The order of formula terms
- 9.3.1 Multiple comparisons
- 9.3.2 Assessing test assumptions
- 9.4 One-way ANCOVA
- 9.4.1 Assessing test assumptions
- 9.4.2 Visualizing the results
- 9.6 Repeated measures ANOVA
- 9.7 Multivariate analysis of variance (MANOVA)
- 9.7.1 Assessing test assumptions
- 9.7.2 Robust MANOVA
- 9.8 ANOVA as regression
- 9.9 Summary
- 10 Power analysis
- 10.1 A quick review of hypothesis testing
- 10.2 Implementing power analysis with the pwr package
- 10.2.1 t-tests
- 10.2.2 ANOVA
- 10.2.3 Correlations
- 10.2.4 Linear models
- 10.2.5 Tests of proportions
- 10.2.7 Choosing an appropriate effect size in novel situations
- 10.3 Creating power analysis plots
- 10.4 Other packages
- 10.5 Summary
- 11 Intermediate graphs
- 11.1 Scatter plots
- 11.1.3 3D scatter plots
- 11.1.4 Spinning 3D scatter plots
- 11.1.5 Bubble plots
- 11.2 Line charts
- 11.3 Corrgrams
- 11.4 Mosaic plots
- 11.5 Summary
- 12 Resampling statistics and bootstrapping
- 12.1 Permutation tests
- 12.2 Permutation tests with the coin package
- 12.2.2 Independence in contingency tables
- 12.2.3 Independence between numeric variables
- 12.2.5 Going further
- 12.3 Permutation tests with the lmPerm package
- 12.3.1 Simple and polynomial regression
- 12.3.2 Multiple regression
- 12.4 Additional comments on permutation tests
- 12.5 Bootstrapping
- 12.6 Bootstrapping with the boot package
- 12.6.1 Bootstrapping a single statistic
- 12.6.2 Bootstrapping several statistics
- 12.7 Summary
- 13 Generalized linear models
- 13.1 Generalized linear models and the glm() function
- 13.1.1 The glm() function
- 13.1.2 Supporting functions
- 13.1.3 Model fit and regression diagnostics
- 13.2 Logistic regression
- 13.2.1 Interpreting the model parameters
- 13.2.2 Assessing the impact of predictors on the probability of an outcome
- 13.2.3 Overdispersion
- 13.2.4 Extensions
- 13.3 Poisson regression
- 13.3.1 Interpreting the model parameters
- 13.3.2 Overdispersion
- 13.3.3 Extensions
- 13.4 Summary
- 14 Principal components and factor analysis
- 14.1 Principal components and factor analysis in R
- 14.2 Principal components
- 14.2.1 Selecting the number of components to extract
- 14.2.2 Extracting principal components
- 14.2.3 Rotating principal components
- 14.2.4 Obtaining principal components scores
- 14.3 Exploratory factor analysis
- 14.3.1 Deciding how many common factors to extract
- 14.3.2 Extracting common factors
- 14.3.3 Rotating factors
- 14.3.4 Factor scores
- 14.4 Other latent variable models
- 14.5 Summary
- 15 Time series
- 15.1 Creating a time-series object in R
- 15.2 Smoothing and seasonal decomposition
- 15.2.1 Smoothing with simple moving averages
- 15.2.2 Seasonal decomposition
- 15.3 Exponential forecasting models
- 15.3.1 Simple exponential smoothing
- 15.3.3 The ets() function and automated forecasting
- 15.4 ARIMA forecasting models
- 15.4.1 Prerequisite concepts
- 15.4.2 ARMA and ARIMA models
- 15.4.3 Automated ARIMA forecasting
- 15.5 Going further
- 15.6 Summary
- 16 Cluster analysis
- 16.1 Common steps in cluster analysis
- 16.2 Calculating distances
- 16.3 Hierarchical cluster analysis
- 16.4 Partitioning cluster analysis
- 16.4.2 Partitioning around medoids
- 16.5 Avoiding nonexistent clusters
- 16.6 Summary
- 17 Classification
- 17.1 Preparing the data
- 17.2 Logistic regression
- 17.3 Decision trees
- 17.3.1 Classical decision trees
- 17.3.2 Conditional inference trees
- 17.4 Random forests
- 17.5 Support vector machines
- 17.5.1 Tuning an SVM
- 17.6 Choosing a best predictive solution
- 17.7 Using the rattle package for data mining
- 17.8 Summary
- 18 Advanced methods for missing data
- 18.1 Steps in dealing with missing data
- 18.2 Identifying missing values
- 18.3 Exploring missing-values patterns
- 18.3.1 Tabulating missing values
- 18.3.2 Exploring missing data visually
- 18.3.3 Using correlations to explore missing values
- 18.4 Understanding the sources and impact of missing data
- 18.5 Rational approaches for dealing with incomplete data
- 18.6 Complete-case analysis (listwise deletion)
- 18.7 Multiple imputation
- 18.8 Other approaches to missing data
- 18.8.1 Pairwise deletion
- 18.8.2 Simple (nonstochastic) imputation
- 18.9 Summary
- 19 Advanced graphics with ggplot2
- 19.1 The four graphics systems in R
- 19.2 An introduction to the ggplot2 package
- 19.3 Specifying the plot type with geoms
- 19.4 Grouping
- 19.5 Faceting
- 19.6 Adding smoothed lines
- 19.7 Modifying the appearance of ggplot2 graphs
- 19.7.1 Axes
- 19.7.2 Legends
- 19.7.3 Scales
- 19.7.4 Themes
- 19.7.5 Multiple graphs per page
- 19.8 Saving graphs
- 19.9 Summary
- 20 Advanced programming
- 20.1 A review of the language
- 20.1.1 Data types
- 20.1.2 Control structures
- 20.1.3 Creating functions
- 20.2 Working with environments
- 20.3 Object-oriented programming
- 20.3.1 Generic functions
- 20.3.2 Limitations of the S3 model
- 20.4 Writing efficient code
- 20.5 Debugging
- 20.5.1 Common sources of errors
- 20.5.2 Debugging tools
- 20.5.3 Session options that support debugging
- 20.6 Going further
- 20.7 Summary
- 21 Creating a package
- 21.1 Nonparametric analysis and the npar package
- 21.1.1 Comparing groups with the npar package
- 21.2 Developing the package
- 21.2.1 Computing the statistics
- 21.2.2 Printing the results
- 21.2.3 Summarizing the results
- 21.2.4 Plotting the results
- 21.2.5 Adding sample data to the package
- 21.3 Creating the package documentation
- 21.4 Building the package
- 21.5 Going further
- 21.6 Summary
- 22 Creating dynamic reports
- 22.1 A template approach to reports
- 22.2 Creating dynamic reports with R and Markdown
- 22.3 Creating dynamic reports with R and LaTeX
- 22.4 Creating dynamic reports with R and Open Document
- 22.5 Creating dynamic reports with R and Microsoft Word
- 22.6 Summary
- afterword Into the rabbit hole
- appendix A Graphical user interfaces
- appendix B Customizing the startup environment
- appendix C Exporting data from R
- Delimited text file
- Excel spreadsheet
- Statistical applications
- appendix D Matrix algebra in R
- appendix E Packages used in this book
- appendix F Working with large datasets
- F.1 Efficient programming
- F.2 Storing data outside of RAM
- F.3 Analytic packages for out-of-memory data
- F.4 Comprehensive solutions for working with enormous datasets
- appendix G Updating an R installation
- G.1 Automated installation (Windows only)
- G.2 Manual installation (Windows and Mac OS X)
- G.3 Updating an R installation (Linux)
- references
- index
- Symbols
- Numerics
- 23.1 The lattice package
- 23.2 Conditioning variables
- 23.3 Panel functions
- 23.4 Grouping variables
- 23.5 Graphic parameters
- 23.6 Customizing plot strips
- 23.7 Page arrangement
- 23.8 Going further
5.2 Numerical and character functions
In this section, we’ll review functions in R that can be used as the basic building blocks for manipulating data. They can be divided into numerical (mathematical, statistical, probability) and character functions. After we review each type, I’ll show you how to apply functions to the columns (variables) and rows (observations) of matrices and data frames (see section 5.2.6).
5.2.1 Mathematical functions
Table 5.2 lists common mathematical functions along with short examples.
Table 5.2 Mathematical functions

| Function | Description | Example |
|---|---|---|
| abs(x) | Absolute value | abs(-4) returns 4 |
| sqrt(x) | Square root | sqrt(25) returns 5; this is the same as 25^(0.5) |
| ceiling(x) | Smallest integer not less than x | ceiling(3.475) returns 4 |
| floor(x) | Largest integer not greater than x | floor(3.475) returns 3 |
| trunc(x) | Integer formed by truncating values in x toward 0 | trunc(5.99) returns 5 |
| round(x, digits=n) | Rounds x to the specified number of decimal places | round(3.475, digits=2) returns 3.48 |
| signif(x, digits=n) | Rounds x to the specified number of significant digits | signif(3.475, digits=2) returns 3.5 |
| cos(x), sin(x), tan(x) | Cosine, sine, and tangent | cos(2) returns -0.416 |
| acos(x), asin(x), atan(x) | Arc-cosine, arc-sine, and arc-tangent | acos(-0.416) returns 2 |
| cosh(x), sinh(x), tanh(x) | Hyperbolic cosine, sine, and tangent | sinh(2) returns 3.627 |
| acosh(x), asinh(x), atanh(x) | Hyperbolic arc-cosine, arc-sine, and arc-tangent | asinh(3.627) returns 2 |
| log(x, base=n) | Logarithm of x to the base n | log(100, base=10) returns 2 |
| log(x) | Natural logarithm (provided for convenience) | log(10) returns 2.3026 |
| log10(x) | Common (base-10) logarithm (provided for convenience) | log10(10) returns 1 |
| exp(x) | Exponential function | exp(2.3026) returns 10 |
CHAPTER 5 Advanced data management
Data transformation is one of the primary uses for these functions. For example, you often transform positively skewed variables such as income to a log scale before further analyses. Mathematical functions are also used as components in formulas, in plotting functions (for example, x versus sin(x)), and in formatting numerical values prior to printing.
The examples in table 5.2 apply mathematical functions to scalars (individual numbers). When these functions are applied to numeric vectors, matrices, or data frames, they operate on each individual value. For example, sqrt(c(4, 16, 25)) returns c(2, 4, 5).
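This element-wise behavior means an entire variable can be transformed in a single call. A minimal sketch (the income values here are made up for illustration):

```r
# Mathematical functions are vectorized: they apply to each element
x <- c(4, 16, 25)
sqrt(x)                                     # returns c(2, 4, 5)

# Typical use: log-transform a positively skewed variable
income <- c(20000, 35000, 50000, 1200000)   # hypothetical, skewed values
log_income <- log(income)                   # values on a much less skewed scale
round(log_income, 2)
```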
5.2.2 Statistical functions
Common statistical functions are presented in table 5.3. Many of these functions have optional parameters that affect the outcome. For example,
y <- mean(x)
provides the arithmetic mean of the elements in object x, and
z <- mean(x, trim = 0.05, na.rm=TRUE)
provides the trimmed mean, dropping the highest and lowest 5% of scores and any missing values. Use the help() function to learn more about each function and its arguments.
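These options matter in practice. A short sketch with made-up scores containing an outlier and a missing value:

```r
x <- c(1, 2, 3, 4, 5, 100, NA)    # hypothetical scores: one outlier, one NA
mean(x)                           # returns NA: missing values propagate
mean(x, na.rm=TRUE)               # arithmetic mean of the six observed values
mean(x, trim=0.2, na.rm=TRUE)     # drops the lowest and highest 20% of scores,
                                  # so the outlier no longer dominates
```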
Table 5.3 Statistical functions

| Function | Description | Example |
|---|---|---|
| mean(x) | Mean | mean(c(1,2,3,4)) returns 2.5 |
| median(x) | Median | median(c(1,2,3,4)) returns 2.5 |
| sd(x) | Standard deviation | sd(c(1,2,3,4)) returns 1.29 |
| var(x) | Variance | var(c(1,2,3,4)) returns 1.67 |
| mad(x) | Median absolute deviation | mad(c(1,2,3,4)) returns 1.48 |
| quantile(x, probs) | Quantiles of the numeric vector x, where probs is a numeric vector of probabilities in [0,1] | y <- quantile(x, c(.3,.84)) returns the 30th and 84th percentiles of x |
| range(x) | Range | x <- c(1,2,3,4); range(x) returns c(1,4); diff(range(x)) returns 3 |
| sum(x) | Sum | sum(c(1,2,3,4)) returns 10 |
| diff(x, lag=n) | Lagged differences, with lag indicating which lag to use (the default lag is 1) | x <- c(1, 5, 23, 29); diff(x) returns c(4, 18, 6) |
| min(x) | Minimum | min(c(1,2,3,4)) returns 1 |
| max(x) | Maximum | max(c(1,2,3,4)) returns 4 |
| scale(x, center=TRUE, scale=TRUE) | Column center (center=TRUE) or standardize (center=TRUE, scale=TRUE) data object x | An example is given in listing 5.6 |
To see these functions in action, look at the next listing. This example demonstrates two ways to calculate the mean and standard deviation of a vector of numbers.
Listing 5.1 Calculating the mean and standard deviation

> x <- c(1,2,3,4,5,6,7,8)
> mean(x)                      # Short way
[1] 4.5
> sd(x)
[1] 2.449490
> n <- length(x)               # Long way
> meanx <- sum(x)/n
> css <- sum((x - meanx)^2)
> sdx <- sqrt(css / (n-1))
> meanx
[1] 4.5
> sdx
[1] 2.449490
It’s instructive to view how the corrected sum of squares (css) is calculated in the second approach:

1. x equals c(1, 2, 3, 4, 5, 6, 7, 8), and meanx equals 4.5 (length(x) returns the number of elements in x).
2. (x - meanx) subtracts 4.5 from each element of x, resulting in c(-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5).
3. (x - meanx)^2 squares each element of (x - meanx), resulting in c(12.25, 6.25, 2.25, 0.25, 0.25, 2.25, 6.25, 12.25).
4. sum((x - meanx)^2) sums the elements of (x - meanx)^2, resulting in 42.
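Each intermediate quantity can be checked directly against R's built-in functions; a minimal sketch:

```r
x <- c(1, 2, 3, 4, 5, 6, 7, 8)
meanx <- sum(x) / length(x)        # 4.5, same as mean(x)
css <- sum((x - meanx)^2)          # corrected sum of squares: 42
sqrt(css / (length(x) - 1))        # matches sd(x): 2.449490
```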
Writing formulas in R has much in common with matrix-manipulation languages such as MATLAB (we’ll look more specifically at solving matrix algebra problems in appendix D).
Standardizing data
By default, the scale() function standardizes the specified columns of a matrix or data frame to a mean of 0 and a standard deviation of 1:
newdata <- scale(mydata)
To standardize each column to an arbitrary mean and standard deviation, you can use code similar to the following
newdata <- scale(mydata)*SD + M
where M is the desired mean and SD is the desired standard deviation. Using the scale() function on non-numeric columns produces an error. To standardize a specific column rather than an entire matrix or data frame, you can use code such as this:
newdata <- transform(mydata, myvar = scale(myvar)*10+50)
This code standardizes the variable myvar to a mean of 50 and standard deviation of 10. You’ll use the scale() function in the solution to the data-management challenge in section 5.3.
5.2.3 Probability functions
You may wonder why probability functions aren’t listed with the statistical functions (it was really bothering you, wasn’t it?). Although probability functions are statistical by definition, they’re unique enough to deserve their own section. Probability functions are often used to generate simulated data with known characteristics and to calculate probability values within user-written statistical functions.
In R, probability functions take the form
[dpqr]distribution_abbreviation()
where the first letter refers to the aspect of the distribution returned:
d = density
p = distribution function
q = quantile function
r = random generation (random deviates)
The common probability functions are listed in table 5.4.
Table 5.4 Probability distributions

| Distribution | Abbreviation | Distribution | Abbreviation |
|---|---|---|---|
| Beta | beta | Logistic | logis |
| Binomial | binom | Multinomial | multinom |
| Cauchy | cauchy | Negative binomial | nbinom |
| Chi-squared (noncentral) | chisq | Normal | norm |
| Exponential | exp | Poisson | pois |
| F | f | Wilcoxon signed rank | signrank |
| Gamma | gamma | T | t |
| Geometric | geom | Uniform | unif |
| Hypergeometric | hyper | Weibull | weibull |
| Lognormal | lnorm | Wilcoxon rank sum | wilcox |
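The same four-letter scheme applies to every distribution in table 5.4. As a sketch, here it is for the binomial distribution:

```r
dbinom(3, size=10, prob=0.5)    # density: P(X = 3) for Binomial(10, 0.5)
pbinom(3, size=10, prob=0.5)    # distribution function: P(X <= 3)
qbinom(0.5, size=10, prob=0.5)  # quantile function: the median, 5
rbinom(5, size=10, prob=0.5)    # five random binomial deviates
```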
To see how these work, let’s look at functions related to the normal distribution. If you don’t specify a mean and a standard deviation, the standard normal distribution is assumed (mean=0, sd=1). Examples of the density (dnorm), distribution (pnorm), quantile (qnorm), and random deviate generation (rnorm) functions are given in table 5.5.
Table 5.5 Normal distribution functions

Problem: Plot the standard normal curve on the interval [-3,3] (see figure).
Solution:
    x <- pretty(c(-3,3), 30)
    y <- dnorm(x)
    plot(x, y,
         type = "l",
         xlab = "Normal Deviate",
         ylab = "Density",
         yaxs = "i")

(The accompanying figure shows the resulting bell-shaped density curve, with Normal Deviate on the x-axis from -3 to 3 and Density on the y-axis.)

Problem: What is the area under the standard normal curve to the left of z = 1.96?
Solution: pnorm(1.96) equals 0.975.

Problem: What is the value of the 90th percentile of a normal distribution with a mean of 500 and a standard deviation of 100?
Solution: qnorm(.9, mean=500, sd=100) equals 628.16.

Problem: Generate 50 random normal deviates with a mean of 50 and a standard deviation of 10.
Solution: rnorm(50, mean=50, sd=10)
Don’t worry if the plot() function options are unfamiliar. They’re covered in detail in chapter 11; pretty() is explained in table 5.7 later in this chapter.
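Note that the quantile and distribution functions are inverses of each other, which gives a quick sanity check on results like those in table 5.5:

```r
pnorm(1.96)                            # area to the left of z = 1.96: about 0.975
qnorm(0.975)                           # inverse of pnorm: about 1.96
q90 <- qnorm(0.9, mean=500, sd=100)    # 90th percentile: about 628.16
pnorm(q90, mean=500, sd=100)           # recovers the probability 0.9
```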
SETTING THE SEED FOR RANDOM NUMBER GENERATION
Each time you generate pseudo-random deviates, a different seed is used, and therefore different results are produced. To make your results reproducible, you can specify the seed explicitly using the set.seed() function. An example is given in the next listing. Here, the runif() function is used to generate pseudo-random numbers from a uniform distribution on the interval 0 to 1.
Listing 5.2 Generating pseudo-random numbers from a uniform distribution

> runif(5)
[1] 0.8725344 0.3962501 0.6826534 0.3667821 0.9255909
> runif(5)
[1] 0.4273903 0.2641101 0.3550058 0.3233044 0.6584988
> set.seed(1234)
> runif(5)
[1] 0.1137034 0.6222994 0.6092747 0.6233794 0.8609154
> set.seed(1234)
> runif(5)
[1] 0.1137034 0.6222994 0.6092747 0.6233794 0.8609154
By setting the seed manually, you’re able to reproduce your results. This ability can be helpful in creating examples you can access in the future and share with others.
GENERATING MULTIVARIATE NORMAL DATA
In simulation research and Monte Carlo studies, you often want to draw data from a multivariate normal distribution with a given mean vector and covariance matrix. The mvrnorm() function in the MASS package makes this easy. The function call is
mvrnorm(n, mean, sigma)
where n is the desired sample size, mean is the vector of means, and sigma is the variance-covariance (or correlation) matrix. Listing 5.3 samples 500 observations from a three-variable multivariate normal distribution for which the following are true:
Mean vector: 230.7, 146.7, 3.6

Covariance matrix:

    15360.8    6721.2    -47.1
     6721.2    4700.9    -16.5
      -47.1     -16.5      0.3
Listing 5.3 Generating data from a multivariate normal distribution

> library(MASS)
> options(digits=3)
> set.seed(1234)                                  # Sets the random-number seed
> mean <- c(230.7, 146.7, 3.6)                    # Specifies the mean vector
> sigma <- matrix(c(15360.8, 6721.2, -47.1,       #   and covariance matrix
+                   6721.2, 4700.9, -16.5,
+                   -47.1, -16.5, 0.3), nrow=3, ncol=3)
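The listing breaks off at this point. As a sketch of the remaining step, the draw itself would look something like the following (mydata is an assumed name for the result, not taken from the text above):

```r
library(MASS)   # ships with standard R installations

set.seed(1234)
mean <- c(230.7, 146.7, 3.6)
sigma <- matrix(c(15360.8, 6721.2, -47.1,
                  6721.2, 4700.9, -16.5,
                  -47.1, -16.5, 0.3), nrow=3, ncol=3)

mydata <- mvrnorm(500, mean, sigma)   # 500 draws from the trivariate normal
dim(mydata)                           # 500 rows, 3 columns
colMeans(mydata)                      # close to the specified mean vector
```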