R: advanced statistical package

About R
R is a general-purpose, command-line based environment for working with data. It takes a functional approach to operating on vectors and matrices, and is a convenient environment for processing, analyzing, and plotting data.
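A minimal interactive session (hypothetical values) illustrates this vector-based approach:

```r
# a numeric vector: most R functions operate on whole vectors at once
x <- c(2, 4, 6, 8)
mean(x)        # arithmetic mean: 5
x / 2          # element-wise division: 1 2 3 4

# a 2x3 matrix, filled column-wise
m <- matrix(1:6, nrow=2)
dim(m)         # 2 3
colMeans(m)    # mean of each column: 1.5 3.5 5.5
```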

Soils-Related R Packages
The 'aqp' (Algorithms for Quantitative Pedology) package was developed to facilitate numerical extensions to classical studies of soil geography, genesis and classification. [CRAN] [R-Forge]

Misc. Articles

Access Data Stored in a PostgreSQL Database

Perform some temporal aggregation (by day and by week) of the amount of data entry completed in PostgreSQL, and plot the results in R. See the resulting figure at the bottom of the page. Note that this requires the Rdbi and RdbiPgSQL packages. Hints on installing these packages can be found on this page...

Weekly Aggregation (hints from the psql manual page)

SELECT week, count(week) AS entered
FROM (
SELECT pedon_id, creation_date, extract( week FROM creation_date) AS week
FROM description
ORDER BY creation_date ASC
) AS a
GROUP BY a.week
ORDER BY week;

Daily Aggregation (hints from the psql manual page)

SELECT doy, count(doy) AS entered
FROM (
SELECT pedon_id, creation_date, extract( doy FROM creation_date) AS doy
FROM description
ORDER BY creation_date ASC
) AS a
GROUP BY a.doy
ORDER BY doy;

R Example

##### load the same data in from PgSQL #####


# conn becomes an object which contains the DB connection:
conn <- dbConnect(PgSQL(), host="localhost", dbname="xxx", user="xxx", password="xxx")
# check that the connection works (should list any existing tables):
# dbListTables(conn)

## create an object which contains the SQL query (the weekly aggregation query from above):
query <- dbSendQuery(conn, "select week, count(week) as entered from (select pedon_id, creation_date, extract( week from creation_date) as week from description order by creation_date asc) as a group by a.week order by week;")

# fetch data according to query:
x <- dbGetResult(query)

# create an object which contains the SQL query:
query <- dbSendQuery(conn, "select doy, count(doy) as entered from (select pedon_id, creation_date, extract( doy from creation_date) as doy from description order by creation_date asc ) as a group by a.doy order by doy;")

# fetch data according to query:
y <- dbGetResult(query)

# set up the plot environment: stack the two plots
par(mfrow=c(2,1))

# plot cumulative progress, by week
plot(x$week, cumsum(x$entered), type='b', xlab='Week', ylab='Pedon Forms Completed', main='Weekly Progress')

# plot cumulative progress, by day
plot(y$doy, cumsum(y$entered), type='b', xlab='Day of Year', ylab='Pedon Forms Completed', main='Daily Progress')

Pedon entry progress

Additive Time Series Decomposition in R: Soil Moisture and Temperature Data

Decagon Sensors: EC-5 (moisture) and ECT (temperature)

Simple demonstration of working with time-series data collected from Decagon Devices soil moisture and temperature sensors. These sensors were installed in a potted plant that was semi-regularly watered, and data were collected for about 80 days on an hourly basis. Several basic operations in R are demonstrated:

  • reading raw data in CSV format
  • converting date-time values to R's date-time format
  • applying a calibration curve to raw sensor values
  • initialization of R time series objects
  • seasonal decomposition of additive time series (trend extraction)
  • plotting of decomposed time series, ACF, and cross-ACF

Process the raw sensor values with standard calibrations

## data from office plant: in potting soil
# raw data dump -- need to convert datetime + values:
x1 <- read.csv('office_plant_2.csv', header=FALSE)

# datetime is seconds from jan 1st 2000
t_0 <- as.POSIXlt(strptime('2000-01-01 00:00:00', format='%Y-%m-%d %H:%M:%S'))

# calibration for potting soil
raw_to_vwc <- function(d) {vwc <- (d * 0.00119) - 0.401 ; vwc }

# calibration for deg C
raw_to_temp <- function(d) {t <- log( (4095/d) - 1 ) ; t_c <-  25.02 + t * (-22.84 + t * (1.532 + (-0.08372 * t))) ; t_c}

# convert values
y1 <- data.frame(date=t_0 + x1$V1, m=raw_to_vwc(x1$V2), t=raw_to_temp(x1$V5))

# make a nice time axis
d.range <- range(y1$date)
d.list <- seq(d.range[1], d.range[2], by='week')

# note that there are several tricks here:
# stacking two plots that share an axis
# customized x-axis
# and manually adding a title with mtext()
par(mar = c(0.5, 4, 0, 1), oma = c(3, 0, 4, 0), mfcol = c(2,1))
plot(m ~ date, data=y1, type='l', ylab='VWC (EC-5 Sensor)', xaxt='n', las=2, cex.axis=0.75)
plot(t ~ date, data=y1, type='l', ylab='Deg. C (EC-T Sensor)', xaxt='n', las=2, cex.axis=0.75)
axis.POSIXct(at=d.list, side=1, format="%b-%d", cex.axis=0.75)
mtext('Potted Plant Experiment', outer=TRUE, line=2, font=2)

# save copy of raw data

Decompose each time series into additive components

# look at components of time series:
# we recorded measurements once an hour, so let's treat these data as having a daily cycle
temp.ts <- ts(y1$t, freq=24)
vwc.ts <- ts(y1$m, freq=24)

# decompose additive time series with STL
# (Seasonal Decomposition of Time Series by Loess)
temp.stl <- stl(temp.ts, s.window=24)
vwc.stl <- stl(vwc.ts, s.window=24)

# these are referenced by day, so we need a new index for
# plotting meaningful dates on the x-axis
# generate the difference in days, from the first observations, at each date label
date.day_idx <- as.numeric((d.list - d.range[1]) / 60 / 60 / 24)

# note special syntax
par(mar = c(0, 4, 0, 3), oma = c(5, 0, 4, 0), mfcol = c(4,1), xaxt='n')
plot(temp.stl , main='Temperature (deg C)')
mtext(at=date.day_idx, text=format(d.list, "%b-%d"), side=1, cex=0.75)

# note special syntax
par(mar = c(0, 4, 0, 3), oma = c(5, 0, 4, 0), mfcol = c(4,1), xaxt='n')
plot(vwc.stl , main='Volumetric Water Content')
mtext(at=date.day_idx, text=format(d.list, "%b-%d"), side=1, cex=0.75)

Additive Time Series Decomposition: Temperature

Additive Time Series Decomposition: Volumetric Water Content

Auto-Correlation Function (ACF)

# look at ACF: ind. time series, and cross-ACF
acf( ts.union(temp.ts, vwc.ts) )

# extract seasonal components from each sensor, union, and plot together
temp_vwc.ts <- ts.union(Temperature=temp.stl$time.series[,1], VWC=vwc.stl$time.series[,1])
plot(temp_vwc.ts, main='Seasonal Components', mar.multi= c(1, 5.1, 1, 1))

Soil Moisture and Temperature ACF: auto-correlation function of each time series, and cross-ACF.

Interesting Results
Variation in temperature with time is dominated by diurnal fluctuations superposed on underlying fluctuations caused by the building's heating/cooling system. The magnitude of the diurnal cycle appears to be related to moisture content, as expected given the high heat capacity of water. Diurnal variation in moisture values appears to account for less than 2% absolute change in volumetric water content.


Aggregating SSURGO Data in R

SSURGO is a digital, high-resolution (1:24,000) soil survey database produced by the USDA-NRCS. It is one of the largest and most complete spatial databases in the world, and is available for nearly the entire USA at no cost. These data are distributed as a combination of geographic and text data, representing soil map units and their associated properties. Unfortunately the text files do not come with column headers, so a template is required to make sense of the data. Alternatively, one can use an MS Access template to attach column names, generate reports, and perform other such tasks. CSV files can be exported from the MS Access database for further use. A follow-up post, with text file headers and a complete PostgreSQL database schema, will contain details on implementing a SSURGO database without using MS Access.

If you happen to have some of the SSURGO tabular data that includes column names, the following R code may be of general interest for resolving the 1:many:many hierarchy of relationships required to make a thematic map.

This is the format we want the data to be in

    mukey     clay      silt      sand water_storage
   458581 20.93750 20.832237 20.861842     14.460000
   458584 43.11513 30.184868 26.700000     23.490000
   458593 50.00000 27.900000 22.100000     22.800000
   458595 34.04605 14.867763 11.776974     18.900000

So we can make a map like this

Loading Data Into R

# need this for ddply()
library(plyr)

# load horizon and component data
chorizon <- read.csv('chorizon_table.csv')

# only keep some of the columns from the component table
component <- read.csv('component_table.csv')[,c('mukey','cokey','comppct_r')]

Function Definitions

# custom function for calculating a weighted mean
# values passed in should be vectors of equal length
wt_mean <- function(property, weights) {
        # compute thickness-weighted mean, but only when we have enough data
        # otherwise return NA
        # save indices of missing / present data
        property.that.is.na <- which( is.na(property) )
        property.that.is.not.na <- which( !is.na(property) )
        if( length(property) - length(property.that.is.na) >= 1)
                prop.aggregated <- sum(weights[property.that.is.not.na] * property[property.that.is.not.na], na.rm=TRUE) / sum(weights[property.that.is.not.na], na.rm=TRUE)
        else
                prop.aggregated <- NA
        return(prop.aggregated)
}


profile_total <- function(property, thickness) {
        # compute profile total, but only when we have enough data
        # otherwise return NA
        # save indices of missing / present data
        property.that.is.na <- which( is.na(property) )
        property.that.is.not.na <- which( !is.na(property) )
        if( length(property) - length(property.that.is.na) >= 1)
                prop.aggregated <- sum(thickness[property.that.is.not.na] * property[property.that.is.not.na], na.rm=TRUE)
        else
                prop.aggregated <- NA
        return(prop.aggregated)
}


# define a function to perform hz-thickness weighted aggregation
component_level_aggregation <- function(i) {

        # horizon thickness is our weighting vector
        hz_thick <- i$hzdepb_r - i$hzdept_r

        # compute wt.mean aggregate values
        clay <- wt_mean(i$claytotal_r, hz_thick)
        silt <- wt_mean(i$silttotal_r, hz_thick)
        sand <- wt_mean(i$sandtotal_r, hz_thick)
        # compute profile sum values
        water_storage <- profile_total(i$awc_r, hz_thick)

        # make a new data frame out of the aggregate values
        d <- data.frame(cokey=unique(i$cokey), clay=clay, silt=silt, sand=sand, water_storage=water_storage)

        return(d)
}


mapunit_level_aggregation <- function(i) {
        # component percentage is our weighting vector
        comppct <- i$comppct_r

        # wt. mean by component percent
        clay <- wt_mean(i$clay, comppct)
        silt <- wt_mean(i$silt, comppct)
        sand <- wt_mean(i$sand, comppct)
        water_storage <- wt_mean(i$water_storage, comppct)

        # make a new data frame out of the aggregate values
        d <- data.frame(mukey=unique(i$mukey), clay=clay, silt=silt, sand=sand, water_storage=water_storage)

        return(d)
}


Performing the Aggregation

# aggregate horizon data to the component level
chorizon.agg <- ddply(chorizon, .(cokey), .fun=component_level_aggregation, .progress='text')

# join up the aggregate chorizon data to the component table
comp.merged <- merge(component, chorizon.agg, by='cokey')

# aggregate component data to the map unit level
component.agg <- ddply(comp.merged, .(mukey), .fun=mapunit_level_aggregation, .progress='text')

# save data back to CSV
write.csv(component.agg, file='something.csv', row.names=FALSE)

Cluster Analysis 1: finding groups in a randomly generated 2-dimensional dataset

Cluster Analysis 1: 2 class example
Figure 1. Two class example
Cluster Analysis 1: 4 class example
Figure 2: Four class example
Cluster Analysis 1: 2 class example with 2-way fuzzy membership
Figure 3: 2-way fuzzy membership

Examples based on a random dataset (see example code below), illustrating some of the differences between the k-means and c-means clustering methods as implemented in R. Next time: an example with soil profile data collected from the Pinnacles National Monument soil survey efforts. An online version of the PINN soil survey will be available soon here.

Example in R:

## load required packages:
library(e1071)  # assumed here: provides cmeans(); kmeans() is in base R

## make a dataset with 5 populations
x <- matrix( c(
rnorm(50, mean=.3, sd=.5),
rnorm(50, mean=.16, sd=.1),
rnorm(50, mean=.4, sd=.3),
rnorm(50, mean=.6, sd=.2),
rnorm(50, mean=.2, sd=.2)
), ncol=2)

## load function membership() : see attached file at bottom of page

## run an example with 2, then 4 classes: See Figures 1 and 2

## two-way fuzzy membership illustrated with color: See Figure 3
## display 2-way fuzzy membership
plot(x, main="C-means: 2-way Fuzzy Membership", type="n", xlab="Variable 1", ylab="Variable 2")
points(cc$centers, col = c("red", "blue"), pch = 8, cex=2)
points(x, col = rgb(cc$membership[,1], 0 ,cc$membership[,2]) , cex=0.5, pch=16)
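The elided clustering calls (referenced above as "see Figures 1 and 2") might look like the following self-contained sketch; kmeans() is in base R, while the fuzzy cmeans() that produces the cc object used above is assumed to come from the e1071 package:

```r
set.seed(1)
x <- matrix(rnorm(250), ncol=2)

# hard partition: k-means assigns each point to exactly one of 2 classes
km <- kmeans(x, centers=2)
table(km$cluster)

# fuzzy partition: c-means returns a membership matrix instead
# (requires the e1071 package; memberships sum to 1 for each point)
# cc <- e1071::cmeans(x, centers=2)
# rowSums(cc$membership)
```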

Color Functions

Sample functions and ideas for accessing R's built-in colors. For further examples on converting soil colors to RGB triplets, or on selecting optimal colors for a thematic map, see the examples linked at the bottom of this page. An excellent discussion of the use of color for presenting scientific data is given in this paper by Zeileis and Hornik.

R Color Selection: simple figure illustrating the layout() function, used to plot the built-in R color palettes.

Simple Color Display

#make a color wheel
pie(rep(1,12), col=rainbow(12))

#make a list of the common color palettes:
demo.pal <- function(n, border = if (n < 32) "light gray" else NA,
        main = paste("color palettes;  n=", n),
        ch.col = c("rainbow(n, start=.7, end=.1)", "heat.colors(n)",
        "terrain.colors(n)", "topo.colors(n)", "cm.colors(n)")) {
        nt <- length(ch.col)
        i <- 1:n; j <- n / nt; d <- j/6; dy <- 2*d
        plot(i, i+d, type="n", yaxt="n", ylab="", main=main)
        for (k in 1:nt) {
                rect(i-.5, (k-1)*j+ dy, i+.4, k*j,
                        col = eval(parse(text=ch.col[k])), border = border)
                text(2*j,  k * j + dy/4, ch.col[k])
        }
}

# for screen devices, larger n may cause color allocation problems
n <- if(.Device == "postscript") 64 else 16
demo.pal(n)

A queryable color picker (as suggested by Gabor Grothendieck on the R-help mailing list)

#make a color lookup function
getColorName <- function(colorNumber) colors()[colorNumber]

# pch = NA means no points plotted.  pch = 20 plots small dots.
# n is the number of points to identify interactively with mouse
printColorSampler <- function(n = 0, pch = NA, bg = "white") {
   i <- seq_along(colors())
   k <- ceiling(sqrt(length(i)))
   xy <- cbind(floor(i/k)*2, i %% k)
   opar <- par(bg = bg)
   on.exit(par(opar))
   plot(xy, axes = FALSE, xlab = "", ylab = "", pch=pch, col=colors())
   text(xy[,1]+.5, xy[,2]+.2, i, col = colors(), cex = 0.7)
   if (n > 0)
      colors()[identify(xy, n = n, labels = colors(), plot = FALSE)]
}

# test
printColorSampler(pch=20, bg="black")

Setup the plot layout, and plot both examples

#setup the layout, and print pane boundaries:
nf <- layout(matrix(c(1,1,2,2), 2, 2, byrow=FALSE), respect=TRUE, widths=c(1,2)) ; layout.show(nf)

#plot the pie chart:
pie(rep(1,12), col=rainbow(12))
#plot the palette chart, using demo.pal() as defined above:
demo.pal(16)

#save a copy to an EPS file (example file name):
dev.copy2eps(file='colors.eps')

Convert Munsell colors to computer-friendly RGB triplets

Soil color conversion: Munsell in LUV colorspace
Figure 1: Munsell color chips.
Soil color conversion: LUV colorspace
Figure 2: Common soil colors.
Soil color conversion: RGB colorspace
Figure 3: Common soil colors in RGB.
Soil color conversion: soil color matrix
Figure 4: Soil colors in multiple color spaces
Soil color conversion: Soil Profile in RGB colorspace
Figure 5: Soil profile colors.

The Munsell color system was designed as a series of discrete color chips that closely approximate the color sensitivity of the human eye. The description of color via three variables tied to perceptible properties (hue, value, and chroma), under a standardized illuminant (sunlight on a clear day), makes the Munsell system a good choice for recording and interpreting soil color data. However, numerical analysis of colors encoded in the Munsell system is difficult because they come from a discrete set of color chips and are referenced by values that include both letters and numbers. Rossel et al. (2006) give a good background on various color space models and their relative usefulness in the realm of soil science. The conversion of Munsell soil colors to RGB triplets, suitable for display on a computer screen or printing, is made complicated by the numerous operations involved in converting between color spaces. Figure 1 shows all possible (both real and unreal) Munsell color chips in the L*U*V color space. Figure 2 shows some of the common soil color chips in the same color space. Figures 3 through 5 depict common soil colors in the RGB color space, visualized both in R and POV-Ray. Example R code for the conversion is given below.

Munsell color data can be downloaded here.
Color conversion equations here.


  1. Rossel, R.A.V.; Minasny, B.; Roudier, P. & McBratney, A.B. Colour space models for soil science Geoderma, 2006, 133, 320-337.

Manual Conversion in R

Setup environment and load lookup table data

## load some libs
library(colorspace)  # assumed: provides the RGB() and LUV colorspace classes used below

## munsell data comes with a lookup table in xyY colorspace
## url: http://www.cis.rit.edu/mcsl/online/munsell.php

## note:
## Munsell chroma, CIE x, y, and Y. The chromaticity coordinates were calculated using illuminant C and the CIE 1931 2 degree observer.
all <- read.table("munsell-all.dat", header=TRUE)

Convert xyY to XYZ [Equation Reference]

## x and y are approx (0,1)
## Y is approx (0,100)

## need to manually rescale Y to (0,1)
all$Y <- all$Y/100.0

## do the conversion
X <- (all$x * all$Y ) / all$y
Y <- all$Y
Z <- ( (1- all$x - all$y) * all$Y )  / all$y

## combine to form matrix for simple manipulation
mun_XYZ_C <- matrix(c(X,Y,Z), ncol=3)

## test for y == 0
## X,Y,Z should then be set to 0
mun_XYZ_C[which(all$y==0),] <- c(0,0,0)

Perform Chromatic Adaptation: functions in the colorspace package and sRGB profiles assume a D65 illuminant [Reference]

## conversion matrix, from reference above
## this has been revised as of Jan, 2008
M_adapt_C_to_D65 <- matrix(c(0.990448, -0.012371, -0.003564, -0.007168, 1.015594, 0.006770, -0.011615, -0.002928, 0.918157), ncol=3, byrow=TRUE)

## perform the chromatic adaptation: convert from C -> D65 using the Bradford method
mun_XYZ_D65 <- mun_XYZ_C %*% M_adapt_C_to_D65

## how different are the two?
summary( (mun_XYZ_D65 - mun_XYZ_C)  )

Convert XYZ (D65) to sRGB (D65), step 1: this assumes that XYZ is scaled to (0,1) [Reference Primaries for sRGB]

## first get the reference primaries transformation matrix from above
## sRGB profile transformation:
M_XYZ_to_sRGB_D65 <- matrix(c(3.24071, -0.969258, 0.0556352, -1.53726, 1.87599, -0.203996, -0.498571, 0.0415557, 1.05707), ncol=3, byrow=TRUE)

## apply the conversion matrix
mun_sRGB_D65 <- mun_XYZ_D65 %*% M_XYZ_to_sRGB_D65

Convert XYZ (D65) to sRGB (D65), step 2 (sRGB, gamma = 2.4) [Conversion Function to sRGB]

## define the transformation functions:
## these are applied on a conditional basis:
fun1 <- function(col_comp) { 1.055 * ( col_comp ^ ( 1 / 2.4 ) ) - 0.055 }
fun2 <- function(col_comp) { 12.92 * col_comp }

## the specific function is contingent on the absolute value of r,g,b components
R <- ifelse(mun_sRGB_D65[,1] > 0.0031308, fun1(mun_sRGB_D65[,1]), fun2(mun_sRGB_D65[,1]))  
G <- ifelse(mun_sRGB_D65[,2] > 0.0031308, fun1(mun_sRGB_D65[,2]), fun2(mun_sRGB_D65[,2]))  
B <- ifelse(mun_sRGB_D65[,3] > 0.0031308, fun1(mun_sRGB_D65[,3]), fun2(mun_sRGB_D65[,3]))  

## clip values to the range [0,1]
R_clip <- ifelse(R < 0, 0, R)  
G_clip <- ifelse(G < 0, 0, G)  
B_clip <- ifelse(B < 0, 0, B)  

R_clip <- ifelse(R > 1, 1, R_clip)  
G_clip <- ifelse(G > 1, 1, G_clip)  
B_clip <- ifelse(B > 1, 1, B_clip)

## add these back to the original table:
all$R <- R_clip
all$G <- G_clip
all$B <- B_clip

## done with the conversion

## the manually converted data
plot( as(RGB(R_clip,G_clip,B_clip), 'LUV'), cex=0.5)

Using ColorBrewer to assist with thematic map color selection

RColorBrewer Color Combinations
Figure 1: RColorBrewer color combinations
RColorBrewer Color Combinations: 3 colors
Figure 2: 3 colors per combination
RColorBrewer Color Combinations: 9 colors
Figure 3: 9 colors per combination

Choosing the right colors for classes in a thematic map can be a difficult task. The ColorBrewer website provides an interactive tool for browsing numerous color combinations. Each of the color combinations presented on the ColorBrewer website is the culmination of numerous color interpretation studies. In addition, there is a list of special color combinations suitable for audiences that may include color-blind individuals.

The R package RColorBrewer adds the ColorBrewer color combinations, as well as functions for generating new combinations, to R. Figure 1 demonstrates the available color combinations, as returned by the function display.brewer.all().

An example R session:

#load the RColorBrewer package [must be installed first with install.packages()]
library(RColorBrewer)

#display the "sequential" color combinations, with 3 colors per combination
#See Figure 2
display.brewer.all(n=3, type='seq')
title("Sequential Color Combinations: 3 Colors per Combination")

#display the "sequential" color combinations, with 9 colors per combination
#See Figure 3
display.brewer.all(n=9, type='seq')
title("Sequential Color Combinations: 9 Colors per Combination")

# convert R colors into RGB triplets;
col2rgb( brewer.pal("Accent", n=5) )

Comparison of Slope and Intercept Terms for Multi-Level Model


When the relationship between two variables is (potentially) dependent on a third, categorical variable, ANCOVA (analysis of covariance), or some variant, is commonly used. There are several approaches to testing for differences in slopes/intercepts (in the case of a simple linear model) between levels of the stratifying variable. In R, the following formula notation is usually used to test for interaction between levels of a factor (f) and the relationship between two continuous variables x and y: y ~ x * f. A simple graphical exploration of this type of model can be done through examination of confidence intervals computed for the slope and intercept terms, within each level of our grouping factor (f). An example with a fictitious dataset is presented below. Note that this is a rough approximation for testing differences in slope/intercept within a multi-level model. A more robust approach would take into account that we are trying to make several pair-wise comparisons, i.e. something akin to Tukey's HSD. Something like this can be done with the multcomp package. For any real dataset you should always consult a real statistician.

Example Multi-Level Model: each panel represents a model fit to y ~ x, for group f

Example Multi-Level Data

# need this for xyplot()
library(lattice)

# make some fake data:
x <- rnorm(100, mean=3, sd=6)
y <- x * runif(100, min=1, max=7) + runif(100, min=1.8, max=5)
d <- data.frame(x, y, f=rep(letters[1:10], each=10))

# check it out
xyplot(y ~ x | f, data=d, type=c('p','r'))


Example Multi-Level Model: Confidence Intervals: parameter estimates along with 95% confidence intervals, within each level of our grouping factor (f).

Automated Plotting of Parameter Confidence Intervals

# split by factor
d.l <- split(d, d$f)
# fit model for each level of factor
fits <- lapply(d.l, function(d_i) {lm(y ~ x, data=d_i)})

# extract coefs
est <- lapply(fits, coef)

# compute confints
ci <- lapply(fits, confint)

ci.mat <- do.call('rbind', ci)
est.mat <- do.call('rbind', est)
ci.df <- data.frame(f=rep(colnames(sapply(ci, '[')), each=2))
ci.df$lower <- ci.mat[,1]
ci.df$upper <- ci.mat[,2]

# re-attach estimate label
ci.df$which <- row.names(ci.mat)

# add dummy column for estimate
ci.df$estimate <- NA

# make a data frame for the estimates
est.df <- data.frame(which=rep(colnames(est.mat), each=nrow(est.mat)))
est.df$estimate <- as.vector(c(est.mat[,1], est.mat[,2]))
est.df$f <- rep(row.names(est.mat), 2)

# add dummy columns for upper and lower conf ints
est.df$upper <- NA
est.df$lower <- NA

# combine estimate with confints
combined <- rbind(est.df, ci.df)

# combined plot of estimate +/- confint
dotplot(f ~ estimate + lower + upper | which, data=combined, scales=list(relation='free'), xlab="Estimate", ylab="Group", auto.key=list(columns=3),
par.settings=list(superpose.symbol=list(col=c(1), pch=c(16,1,1), cex=c(1,0.75,0.75))))

Formal Evaluation with lm()

The first two lines in the output below test the hypothesis that the slope and intercept terms for level 'a' are not different from 0. Subsequent hypothesis tests are relative to the first level in the dataset. In this case we are testing the hypothesis that the intercept and slope terms for levels 'b' through 'j' are not different from the corresponding terms for level 'a'. From the output below we can see that none of the intercept terms (levels 'b' through 'j') differ from those for 'a', and that only the slope term for level 'd' is marginally "different" (p=0.0625) from the slope term for 'a'.

Testing Model Terms
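The listing below was presumably produced by a call of the form summary(lm(y ~ x * f, data=d)); a self-contained sketch using the simulated data from above (exact coefficients will differ, since the data are random):

```r
set.seed(1)

# re-create the fictitious grouped data from above
x <- rnorm(100, mean=3, sd=6)
y <- x * runif(100, min=1, max=7) + runif(100, min=1.8, max=5)
d <- data.frame(x, y, f=rep(letters[1:10], each=10))

# interaction model: separate slope and intercept adjustments for levels b..j
fit <- lm(y ~ x * f, data=d)
summary(fit)

# 20 coefficients: (Intercept), x, 9 intercept offsets, 9 slope offsets
length(coef(fit))
```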

            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  4.99570    4.10909   1.216   0.2276    
x            4.40546    0.68230   6.457 7.68e-09 ***
fb          -4.66364    7.28233  -0.640   0.5237    
fc           1.10173    6.52890   0.169   0.8664    
fd           1.51033    6.20212   0.244   0.8082    
fe          -5.28549    6.62921  -0.797   0.4276    
ff          -1.37673    6.39280  -0.215   0.8300    
fg          -7.69480    5.93011  -1.298   0.1982    
fh          -2.34349    5.70703  -0.411   0.6824    
fi           1.14558    6.84805   0.167   0.8676    
fj          -1.12319    7.87523  -0.143   0.8869    
x:fb         0.92661    0.94257   0.983   0.3285    
x:fc         0.43454    1.04819   0.415   0.6796    
x:fd        -1.75956    0.93137  -1.889   0.0625 .  
x:fe        -0.08193    0.96216  -0.085   0.9323    
x:ff        -0.42669    0.99172  -0.430   0.6682    
x:fg         0.57531    0.99279   0.579   0.5639    
x:fh         1.63650    1.02319   1.599   0.1137    
x:fi        -0.38424    0.97753  -0.393   0.6953    
x:fj        -0.89373    1.14337  -0.782   0.4367    

Comparison of Slope and Intercept Terms for Multi-Level Model II: Using Contrasts


Small update to a similar thread from last week on the comparison of slope and intercept terms fit to a multi-level model. I finally figured out (thanks, R-help mailing list!) how to efficiently use contrasts in R. The C() function can be called within a model formula to reset the base level of an un-ordered factor. The UCLA Stats Library has an extensive description of this topic here. This approach can be used to sequentially test for differences between slope and intercept terms from a multi-level model, by re-setting the base level of a factor. See example data and figure below.

Note that the multcomp package has a much more robust approach to this type of operation. Details below.

Example Multi-Level Data

# need these
library(lattice)

# replicate an important experimental dataset
x <- rnorm(100)
y1 <- x[1:25] * 2 + rnorm(25, mean=1)
y2 <- x[26:50] * 2.6 + rnorm(25, mean=1.5)
y3 <- x[51:75] * 2.9 + rnorm(25, mean=5)
y4 <- x[76:100] * 3.5 + rnorm(25, mean=5.5)
d <- data.frame(x=x, y=c(y1,y2,y3,y4), f=factor(rep(letters[1:4], each=25)))

# plot
xyplot(y ~ x, groups=f, data=d,
auto.key=list(columns=4, title='Beard Type', lines=TRUE, points=FALSE, cex=0.75),
type=c('p','r'), ylab='Number of Pirates', xlab='Distance from Land')

Example Multi-Level Model II

Default Contrasts (contr.treatment for regular factors, contr.poly for ordered factors)

# standard comparison to base level of f
summary(lm(y ~ x * f, data=d))

# output:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   1.0747     0.1889   5.689 1.51e-07 ***
x             1.9654     0.1799  10.927  < 2e-16 ***
fb            0.3673     0.2724   1.348   0.1808    
fc            4.1310     0.2714  15.221  < 2e-16 ***
fd            4.4309     0.2731  16.223  < 2e-16 ***
x:fb          0.5951     0.2559   2.326   0.0222 *  
x:fc          1.0914     0.2449   4.456 2.35e-05 ***
x:fd          1.3813     0.2613   5.286 8.38e-07 ***
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Setting the "base level" in the Model Formula. This allows us to compare all slope and intercept terms to the slope and intercept from level 4 of our factor ('d' in our example).

# compare to level 4 of f
summary(lm(y ~ x * C(f, base=4), data=d))

# output:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)         5.5055     0.1972  27.911  < 2e-16 ***
x                   3.3467     0.1896  17.653  < 2e-16 ***
C(f, base = 4)1    -4.4309     0.2731 -16.223  < 2e-16 ***
C(f, base = 4)2    -4.0635     0.2783 -14.603  < 2e-16 ***
C(f, base = 4)3    -0.2999     0.2773  -1.081  0.28230    
x:C(f, base = 4)1  -1.3813     0.2613  -5.286 8.38e-07 ***
x:C(f, base = 4)2  -0.7862     0.2628  -2.992  0.00356 **
x:C(f, base = 4)3  -0.2899     0.2521  -1.150  0.25327    
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Testing with the multcomp Package, using data from the above example

# need these
library(multcomp)
library(sandwich)

# open this vignette, lots of good information
vignette("generalsiminf", package = "multcomp")

# fit two models
l.1 <- lm(y ~ x + f, data=d)
l.2 <- lm(y ~ x * f, data=d)

# note that: tests are AGAINST the null hypothesis

# see the plotting methods:

# pair-wise comparisons
summary(glht(l.1, linfct=mcp(f='Tukey')))

# pair-wise comparisons
# may not be appropriate for model with interaction
summary(glht(l.2, linfct=mcp(f='Tukey')))

# when variance is not homogenous between groups:
summary(glht(l.1, linfct=mcp(f='Tukey'), vcov=sandwich))

Computing Statistics from Poorly Formatted Data (plyr and reshape packages for R)

I was recently asked to verify the coefficients of a linear model fit to sets of data, where each row of the input file was a "site" and each column contained the dependent variable through time (i.e. column 1 = time step 1, column 2 = time step 2, etc.). This format is cumbersome in that it cannot be directly fed into the R lm() function for linear model fitting. Furthermore, we needed the output formatted with columns containing slope, intercept, and R-squared values for each site (rows). All of the re-formatting and model fitting can be done by hand using basic R functions; however, this task seemed like a good case study for the use of the reshape and plyr packages for R. The reshape package can be used to convert between "wide" and "long" format-- the first step in the example presented below. The plyr package can be used to split a data set into subsets (based on a grouping factor), apply an arbitrary function to each subset, and finally return the combined results in several possible formats. The original input data, desired output, and R code are listed below.
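The wide-to-long reshaping and per-group model fitting described above can be sketched in base R alone (hypothetical values; the actual code below uses melt() and ddply()):

```r
# three time steps (columns) for two sites (rows), in "wide" format
wide <- data.frame(t1=c(2.5, 1.7), t2=c(2.3, 1.6), t3=c(2.7, 1.9))

# convert to "long" format: one row per (site, time, value);
# unlist() stacks the columns, so site cycles fastest
long <- data.frame(site=rep(1:2, times=3),
                   time=rep(1:3, each=2),
                   value=unlist(wide, use.names=FALSE))

# fit a linear model per site, then collect intercept + slope
fits <- lapply(split(long, long$site), function(i) lm(value ~ time, data=i))
sapply(fits, coef)
```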


2.521 2.312 2.720 2.254 * 2.922 * * 2.291 2.038 * * 1.151
1.675 1.646 1.860 2.517 * 1.986 * * 3.279 3.420 * * 3.059
1.734 1.305 1.774 2.366 * 2.909 * * 2.863 2.958 * * 2.973
1.637 1.632 2.040 1.807 * 1.889 * * 2.081 2.267 * * 2.655
1.967 8.307 8.331 8.698 * 8.236 * * 7.990 8.255 * * 8.041
1.670 1.744 1.982 2.029 * 2.159 * * 3.330 2.945 * * 3.301
1.668 1.816 1.832 2.100 * 2.289 * * 2.745 2.703 * * 3.216
2.304 2.413 2.749 2.827 * 2.978 * * 3.011 3.244 * * 4.494
1.505 2.827 3.375 1.923 * 4.250 * * 1.542 3.094 * * 1.480


Desired output:

site intercept slope Rsq
1 1 2.8123 -0.0894 0.5115
2 2 1.5229 0.1512 0.7682
3 3 1.5499 0.1351 0.7445
4 4 1.5581 0.0738 0.8453
5 5 6.1738 0.2174 0.1727
6 6 1.4787 0.1527 0.9026
7 7 1.5340 0.1270 0.9871
8 8 2.1403 0.1437 0.8224
9 9 2.7546 -0.0425 0.0306

Add required libraries and load example data files

# these may need to be installed with install.packages()
library(reshape)
library(plyr)
# this one comes with the base install of R
library(lattice)

# read in the data as pasted from original format ('*' codes for missing values)
d1 <- read.table('d1.txt', na.strings='*')
d2 <- read.table('d2.txt')

Reshape data

# transpose and convert to long format
d1.long <- melt(t(d1))
d2.long <- melt(t(d2))

# give reasonable names
names(d1.long) <- c('obs','site','value')
names(d2.long) <- c('obs','site','value')

# add a time variable: 1:n observations, repeated for each of m sites
d1.long$time <- rep(1:13, 9)
d2.long$time <- rep(1:8, 7)

Visually check patterns

xyplot(value ~ time | factor(site), type=c('p','r'), data=d1.long, as.table=TRUE)
xyplot(value ~ time | factor(site), type=c('p','r'), data=d2.long, as.table=TRUE)

Extract linear model terms and R-squared for each subset

fit.summary <- function(i) {
        # fit linear model to this subset of the data
        l <- lm(value ~ time, data=i)
        # extract model coefficients
        l.coef <- coef(l)
        # extract R-squared
        l.rsq <- summary(l)$r.squared
        # combine model details into a single vector
        l.details <- c(l.coef, l.rsq)
        # rename elements of the vector
        names(l.details) <- c('intercept', 'slope', 'Rsq')
        # return rounded values to the calling function
        return(round(l.details, 4))
}

# compute lm details by site
ddply(d1.long, .(site), fit.summary)
ddply(d2.long, .(site), fit.summary)

Creating a Custom Panel Function (R - Lattice Graphics)

The Experiment
It was necessary (for the purposes of this exercise) to generate some grouped data worthy of a creative panel function. An experiment was designed to test the coordination of 4 individuals (each a panel in the figure below), as a function of "clarity of mind" (symbol color in the figure below). The actual details of the experiment can be deduced from the attached data file, and careful inspection of the resulting plot. A similar experiment was conducted some time ago to demonstrate the Spatstat package in R.

A Customized Panel Function for Lattice Graphics -- "panel.bulls_eye()"
Lattice graphics are one of several visualization methods available in R, and they are most useful when working with grouped data. Plots are generated via a formula interface, often in the form y ~ x | f, where y is the dependent variable, x is the independent variable, and f is a grouping factor. Have a look at the attached file (bottom of page) for an example of data in this format. Each panel in the plot is generated by a panel function, using the subset of the original data defined by the grouping variable. In most situations the standard panel functions, such as panel.xyplot, are sufficient. However, when working with more "interesting" data, a customized panel function is the way to go.
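As a bare-bones illustration of the mechanics, independent of the bull's eye example below (all names in this sketch are made up), a custom panel function just wraps a standard one and adds extra drawing commands:

```r
library(lattice)

# a minimal custom panel function: draw the points, then add a
# dashed reference line at the panel-specific mean of y
panel.with_mean <- function(x, y, ...) {
  panel.xyplot(x, y, ...)
  panel.abline(h = mean(y), lty = 2, col = 'grey')
}

# some grouped data to plot: y ~ x, in 3 groups
d <- data.frame(x = rep(1:10, 3), y = rnorm(30), f = gl(3, 10))

# one panel per level of f, each drawn by the custom panel function
p <- xyplot(y ~ x | f, data = d, panel = panel.with_mean)
```

Each panel receives only its own subset of x and y, so mean(y) is computed per-panel, which is exactly the mechanism panel.bulls_eye() relies on below.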

In order to try the sample code out, you will need to:

  1. install the required packages
  2. copy and paste the panel.bulls_eye function source into an R session
  3. download the sample data file
  4. run the code listed in the sample R session

Since panel functions are made to be generic, any data source that is similar in nature to the sample can be directly plotted with this code; for example, if the experiment were repeated with 8 subjects instead of 4. Enjoy.

Panel Function Source

panel.bulls_eye <- function(x, y, groups, subscripts, ...) {
# setup the initial plot, and add the raw data
panel.xyplot(jitter(x), jitter(y), groups=groups, subscripts=subscripts, cex=1.25, pch=3, col=c(1,2,3), ...)

# add the bull's eye
panel.points(0, 0, pch=16, cex=0.25, col='grey')
panel.points(0, 0, pch=1, cex=1.75, col='grey')
panel.points(0, 0, pch=1, cex=4, col='grey')
panel.points(0, 0, pch=1, cex=7, col='grey')

# compute the mean cartesian distance from the bull's eye to all points
z <- signif(mean(sqrt(x^2 + y^2)), 3)
z.text <- paste(z, 'cm')

# compute the mean angle between all points and the bull's eye
# (circ.mean is from the CircStats package)
theta <- circ.mean(atan2(y, x))
theta.text <- paste(signif(theta * 180/pi, 2), 'deg')

# generate a displacement vector
x_prime <- z * cos(theta)
y_prime <- z * sin(theta)

# add the vector to the plot
panel.segments(0, 0, 3, 0, col='grey')
panel.arrows(0, 0, x_prime, y_prime, length=0.1, col='black')

# annotate with accuracy and displacement angle
# (grid.text is from the grid package)
grid.text(label = z.text, gp=gpar(fontsize=16), just='left',
              x = unit(0.05, "npc"),
              y = unit(0.95, "npc"))
grid.text(label = theta.text, gp=gpar(fontsize=16), just='right',
              x = unit(0.90, "npc"),
              y = unit(0.95, "npc"))
}

Example Session (note that several packages are required)

# load required libraries
library(lattice)
library(grid)
library(CircStats)

# read in our data (see attached file)
x <- read.csv('beer_battle.csv')

# plot the data, as stratified by person
xyplot(y ~ x | person, groups=beer, data=x, panel=panel.bulls_eye,
key=list(points=list(col=c(1,2,3), pch=c(3,3,3)), text=list(c('0 beers', '1 beer', '3 beers')), columns=3),
main='Beer Battle 1')

Results: example output from the panel.bulls_eye() function used with xyplot().

Customized Scatterplot Ideas

Panel function for visualizing univariate statistics

panel.dist_summary <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 3))
  # hist(x, probability=TRUE, add=TRUE)
  # default color and line style for the density plot
  density.col <- 'gray'
  density.lty <- 3
  # is this an approximately normally distributed sample?
  # if so, switch to a solid line for the density plot
  # (the Shapiro-Wilk test rejects normality when W is too small)
  s.W <- shapiro.test(x)$statistic
  if(s.W > 0.91) {
    density.col <- 'gray'
    density.lty <- 1
  }
  # compute and plot the density, rescaled to a max height of 0.5
  d <- density(x)
  dy <- d$y / max(d$y) * 0.5
  lines(d$x, dy, col=density.col, lty=density.lty)
  # get a small increment to use in locating the mean and median on the density curve
  delta <- abs(min(x) - max(x)) / 100
  y_mean <- dy[d$x < mean(x) + delta & d$x > mean(x) - delta][1]
  y_median <- dy[d$x < median(x) + delta & d$x > median(x) - delta][1]
  # add points on the density plot for the mean and median
  points(c(mean(x), median(x)), c(y_mean, y_median), col=c('red', 'orange'), pch=16)
  # add a boxplot
  boxplot(x, horizontal=TRUE, boxwex=0.3, add=TRUE)
}

Panel function for printing joint correlation statistic

panel.cor <- function(x, y, digits=2, prefix="", cex.cor, cor.method="pearson") {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- abs(cor(x, y, method=cor.method))
  txt <- format(c(r, 0.123456789), digits=digits)[1]
  txt <- paste(prefix, txt, sep="")
  if(missing(cex.cor)) cex <- 0.8/strwidth(txt)
  text(0.5, 0.5, txt, cex = cex * r, col='gray')
  # it might be interesting to use ks.test() here
  # http://www.physics.csbsju.edu/stats/KS-test.html
}

Example usage with built-in datasets

# enforce a square plotting area
par(pty="s")
pairs(USJudgeRatings[1:5], upper.panel=panel.cor, lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
pairs(iris[1:4], upper.panel=panel.cor, lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
# use the spearman correlation calculation instead of the default pearson:
pairs(iris[1:4], upper.panel=function(...) panel.cor(..., cor.method="spearman"), lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
# color by iris species: note the location of the 'col=' argument
pairs(iris[1:4], upper.panel=panel.cor, lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1, pch=16, col=c("red4", "green3", "blue4")[unclass(iris$Species)]), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2 )
pairs(trees, upper.panel=panel.cor, lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
pairs(swiss, upper.panel=panel.cor, lower.panel=function(...) panel.smooth(...,
col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
# using formula notation:
pairs( ~ Fertility + Education + Catholic, data=swiss, upper.panel=panel.cor, lower.panel=function(...) panel.smooth(..., col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)
pairs(longley, upper.panel=panel.cor, lower.panel=function(...) panel.smooth(...,
col.smooth=gray(.5), lty=1), diag.panel=panel.dist_summary, cex.labels = 2, font.labels=2)

Estimating Missing Data with aregImpute() {R}

Missing Data
Soil scientists routinely sample, characterize, and summarize patterns in soil properties in space, with depth, and through time. Invariably, some samples will be lost, or sufficient funds for complete characterization can run out. In these cases the scientist is left with a data table that contains holes (so to speak) in the rows/columns that are missing data. If the data are used within a regression, missing values in any of the predictor variables or the response result in row-wise deletion, even if 9 out of 10 variables are present. Furthermore, common multivariate methods (PCA, RDA, dissimilarity metrics, etc.) cannot effectively deal with missing data. The scientist is left with a couple of options: 1) row-wise deletion of cases missing any variable, 2) re-sampling or re-characterizing the missing samples, or 3) estimating the missing values from the other variables in the dataset. This last option is called missing data imputation. This is a broad topic, with countless books and scientific papers written about it. Here is a fairly simple introduction to the topic of imputation. Fortunately for us non-experts, there is an excellent function (aregImpute()) in the Hmisc package for R.
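The row-wise deletion problem is easy to demonstrate in base R (synthetic data; the column names are made up):

```r
# a small table with scattered missing values
d <- data.frame(clay = c(22, 31, NA, 18, 27),
                ph   = c(6.1, NA, 5.8, 6.5, 7.0),
                oc   = c(1.2, 0.8, 1.5, NA, 0.9))

# only 2 of the 5 rows are complete; lm() and friends silently
# drop the incomplete ones via na.omit()
sum(complete.cases(d))   # 2
d.complete <- na.omit(d)
nrow(d.complete)         # 2
```

Even though each column is only missing a single value, more than half the rows are lost, which is why imputation is so attractive.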

Below is an example of filling missing data in a soil characterization database with the aregImpute() function. For each missing value, 10 candidate multiple imputations are returned. Otherwise, the function uses default parameters; there are a lot of options, so reading the manual page is highly recommended! From the example below, it looks like we are able to adequately predict missing observations in most variables: R-squared values are all greater than roughly 0.5-0.6. Note that we are using the aregImpute() function to automatically find the "best model" for predicting missing values (for each variable).


## impute missing data with aregImpute() from the Hmisc package
library(Hmisc)

# updated version of the methods used in transcan()
# multiple imputation, requesting 10 candidate values per NA
x.ar <- aregImpute(~ L + A + B + clay + silt + sand + ph + fe_d + fe_o + mn_d + mn_o + Fe + Ca + K + Al + Si + Ti + Zr + Rb + S + Zn, data=x, n.impute=10)

# R-squares for Predicting Non-Missing Values for Each Variable Using Last Imputations of Predictors
# not bad!
    L     A     B  clay  silt  sand    ph  fe_d  fe_o  mn_d  mn_o    Fe    Ca
0.949 0.933 0.934 1.000 1.000 1.000 0.567 0.950 0.597 0.906 0.902 0.913 0.844
    K    Al    Si    Ti    Zr    Rb     S    Zn
0.860 0.839 0.829 0.885 0.886 0.885 0.680 0.730

I am interested in replacing missing data with the mean of the multiple imputations for each case. The code below demonstrates one possible approach. However, this is not the suggested approach for incorporating the imputed values into subsequent analysis! Regression models should be iteratively fit to data containing a single value of each multiple imputation, and model coefficients combined according to rules for mixture distributions. (Thanks for the tip, Cyrus.) There are functions within the Hmisc, rms, and Zelig packages for automating these procedures.
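For reference, a self-contained sketch of the statistically sound route via fit.mult.impute() from Hmisc, which re-fits the model once per imputation set and pools coefficients and variances (made-up data here; in practice you would pass the x.ar object from above):

```r
library(Hmisc)
set.seed(101)

# synthetic data with some missing values (illustrative)
n <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd=0.5)
y  <- 2 + x1 + rnorm(n)
x2[sample(n, 10)] <- NA
d <- data.frame(y, x1, x2)

# multiple imputation: 5 candidate values per NA
a <- aregImpute(~ y + x1 + x2, data=d, n.impute=5)

# fit the linear model once per imputation set, then pool
# the coefficients and variance estimates across the 5 fits
f <- fit.mult.impute(y ~ x1 + x2, lm, xtrans=a, data=d)
coef(f)
```

The pooled standard errors from fit.mult.impute() correctly reflect the extra uncertainty introduced by imputation, which the "mean of imputations" shortcut below ignores.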

Implementation (slightly improper use of multiple imputation)

# get a list of those variables with imputed values
imp.vars <- names(x.ar$imputed)

# compute mean imputed value for each variable
# and extract the original indices of the missing data, by variable
imp.vars.mean <- lapply(x.ar$imputed, function(i) apply(i, 1, mean))
imp.vars.idx <- lapply(imp.vars.mean, function(i) as.integer(names(i)))

# copy the original data
x.no.na <- x

# loop over imputed variables
for(i in imp.vars) {
        # get the mean imputations for this variable
        imp.i <- imp.vars.mean[[i]]
        # get the original indices of the NAs
        idx.i <- imp.vars.idx[[i]]
        # replace the original NAs with imputed values
        x.no.na[idx.i, i] <- imp.i
}

Exploration of Multivariate Data



# load required packages: faces() is provided by e.g. the aplpack package,
# lda() is in MASS, and the wine data ship with e.g. the gclus package
library(aplpack)
library(MASS)
library(gclus)
data(wine)

# chernoff faces, one per wine class
faces(aggregate(wine, list(wine$Class), FUN=mean)[,-c(1,2)], ncol=3, nrow=1)

# LDA on wine data
l <- lda(Class ~ . , data=wine)
plot(l, col=wine$Class)

## some soils data from the car package
library(car)
data(Soils)

# LDA: all horizons
l <- lda(Contour ~ pH + N + Dens + P + Ca + Mg + K + Na, data=Soils)
plot(l, col=as.numeric(Soils$Contour))

# just the top horizon
l <- lda(Contour ~ pH + N + Dens + P + Ca + Mg + K + Na, data=Soils, subset=Depth=='0-10')
plot(l, col=as.numeric(Soils$Contour[Soils$Depth == '0-10']))

Interactive 3D plots with the rgl package

Sample application of the RGL package. This package allows for the creation of interactive, 3D figures, complete with lighting and material effects. Try demo(rgl) for an idea of what is possible.

A random number generator sphere (RNG sphere) was created based on the suggestions in Keys to Infinity by Clifford A. Pickover, pp. 237-239. The RNG sphere can be used to test the robustness of a random number generator. Three random number generators were tested: runif() from R, rand from Excel, and a logistic-derived pseudo-random number generator. The location (x,y,z) and color of the spheres are based on the sequence of random numbers (Pickover, 1995). An ideal RNG sphere should have no discernible patterns. Note that the logistic-derived random numbers show distinct correlation in the RNG sphere. The Excel random number list and source code (R) are attached at the bottom of the page.

RGL sample application: 3d interactive interface to a random number generator sphere. Random numbers from the runif() function in R.

RGL sample application 2: Excel random number visualization: 3d interactive interface to a random number generator sphere. Random numbers from the rand() function in MS Excel.

RGL sample application 3: 3d interactive interface to a random number generator sphere. Random numbers from the logistic function (see notes), implemented in R.

Random Number Generator (RNG) Sphere Function Definition

# simple function for displaying an RNG sphere from a vector of random numbers
rng_sphere <- function(d, type='rgl') {

n <- length(d)
nn <- n - 3

# init our x,y,z coordinate arrays
x <- array(dim=nn)
y <- array(dim=nn)
z <- array(dim=nn)

# init red,green,blue color component arrays
cr <- array(dim=nn)
cg <- array(dim=nn)
cb <- array(dim=nn)

# convert lagged random numbers from d into spherical coordinates
# then convert to cartesian x,y,z coordinates for simple display
for (i in 1:nn) {
theta <- 2*pi*d[i]
phi <- pi*d[i+1]
r <- sqrt(d[i+2])

x[i] <- r * sin(theta) * cos(phi)
y[i] <- r * sin(theta) * sin(phi)
z[i] <- r * cos(theta)

# give each location a color based on some rules
cr[i] <- d[i] / max(d)
cg[i] <- d[i+1] / max(d)
cb[i] <- d[i+2] / max(d)
} # end loop

if(type == 'rgl') {
# draw spheres in an rgl window
spheres3d(x, y, z, radius=0.025, color=rgb(cr,cg,cb))
}

if(type == '2d') {
# static alternative via the scatterplot3d package
scatterplot3d(x, y, z, pch=16, cex.symbols=0.25, color=rgb(cr,cg,cb), axis=FALSE)
}

# optionally return the results
# invisible(list(x=x, y=y, z=z, red=cr, green=cg, blue=cb))
} # end function



# load required packages
library(rgl)
library(scatterplot3d)

# random number with runif():
d <- runif(2500)

# plot rng sphere with rgl:
rng_sphere(d, type='rgl')

# save the contents of the rgl window, e.g. with rgl.snapshot('rng_sphere.png')

# plot rng sphere with scatterplot3d:
rng_sphere(d, type='2d')

# save results

# 2500 excel random numbers
# =rand()
dd <- as.vector(unlist(read.csv('excel_rand.csv')))
rng_sphere(dd, type='rgl')

# 1000 random numbers from the logistic function:

# init an array
ddd <- array(dim=1000)

# seed
ddd[1] <- 0.38273487234

# compute for the next 999 iterations
for (i in 1:999) { ddd[i+1] <- 4 * 1 * ddd[i] * (1 - ddd[i]) }

Making Soil Property vs. Depth Plots

Example with randomly generated data

Generate some data

## generate some profile depths: 0 to 144 cm, in 12 cm increments
depth <- seq(0, 144, by=12)

## generate some property: random numbers in this case
prop <- rnorm(n=length(depth), mean=15, sd=2)

## since 0 is not a real depth, but we would like the graph to start at 0,
## make the first property row (associated with depth 0) the same as the second
prop[1] <- prop[2]

## combine into a table: data read in from a spreadsheet would already be in this format
soil <- data.frame(depth=depth, prop=prop)

The dataframe 'soil' looks like this:

   depth     prop
1      0 13.80257  ** note that these are the same
2     12 13.80257  ** note that these are the same
3     24 18.40298
4     36 13.37446
5     48 13.27973
6     60 14.65288
7     72 16.07339
8     84 15.97451
9     96 16.29970
10   108 16.32155
11   120 14.63699
12   132 13.26486
13   144 13.81730

Plot the data:

## note the reversal of the y-axis with ylim=c(150,0)
plot(depth ~ prop, data=soil, ylim=c(150,0), type='s', ylab='Depth', xlab='Property', main='Property vs. Depth Plot')

Additional Example Using Lattice Graphics
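A minimal lattice version of the same idea (synthetic data as above; the values are illustrative):

```r
library(lattice)

# a small synthetic depth / property table
soil <- data.frame(depth = seq(0, 144, by=12),
                   prop  = rnorm(13, mean=15, sd=2))

# note the reversed y-axis (ylim=c(150,0)) and the step-type plot
p <- xyplot(depth ~ prop, data=soil, ylim=c(150, 0), type='s',
            ylab='Depth (cm)', xlab='Property',
            main='Property vs. Depth Plot')
```

The step type ('s') and reversed y-axis carry over directly from the base-graphics version, and the same formula extends naturally to one panel per pedon, as in the examples below.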

Examples with Some Real Data


  • See attached files at bottom of page
  • Helper function could use some generalization. Until then, your data will need to have the columns:
    1. pedon_id
    2. top
    3. bottom
  • These examples require a recent version of R (>= 2.5.1) and the lattice package

Helper Function (copy this into your R session first)

## function to add a repeated top horizon
## for a correct step-plot
## assumes that there are columns named pedon_id, top, bottom
profile_fix <- function(d) {
## init some vars
p <- levels(d$pedon_id)
idx <- array()
i <- 1

## loop over pedon ids
for(p.id in p) {
## extract one pedon at a time
p.row <- subset(d, subset=pedon_id == p.id)

## record the position of the top (shallowest) horizon
idx[i] <- which(d$top == min(p.row$top) & d$pedon_id == p.id)

## increment counter
i <- i+1
}

## extract the top horizons
d.temp <- d[idx,]

## set the bottom of these records to the top boundary
d.temp$bottom <- d.temp$top

## add the duplicated top horizon records
d.new <- rbind(d, d.temp)

## return the corrected data frame
return(d.new)
}


Load Data and Packages

## load libs
library(lattice)

## read in the first example
x <- read.csv('psa.csv')
## convert pedon_id to a factor:
x <- transform(x, pedon_id = factor(pedon_id))
## add extra top horizon
x.new <- profile_fix(x)
## read in the second example
y <- read.csv('example_data.csv')
##  add the extra top horizon
y.new <- profile_fix(y)

Example 1

## plot using step function
## note special syntax: horizontal=TRUE
xyplot(bottom ~ sand + silt + clay | pedon_id, horizontal=TRUE,
data=x.new, ylim=c(160,-5), type='s',
col=c('Orange', 'RoyalBlue', 'Dark Green'), lty=c(2,2,1), lwd=c(1,1,2),
ylab='Depth (cm)', xlab='Percent Sand, Silt, Clay',
key=list(lines=list(col=c('Orange', 'RoyalBlue', 'Dark Green'), lwd=c(1,1,2), lty=c(2,2,1)),
text=list(c('Sand', 'Silt', 'Clay')), columns=3))

Depth Profile Example 1: sand, silt, and clay vs. depth for three pedons

Example 2

## plot with step function
xyplot(bottom ~ field_pct_clay | pedon_id, horizontal=TRUE,
data=y.new, ylim=c(250,-5), type='s', as.table=TRUE,
ylab='Depth (cm)', xlab='Percent Clay', lwd=2, col='black')

Depth Profile Example 2: clay vs. depth for 9 pedons

New R Package 'aqp': Algorithms for Quantitative Pedology [updates]

Soils are routinely sampled and characterized according to genetic horizons (layers), resulting in data that are associated with principal dimensions: location (x,y), depth (z), and property space (p). The high dimensionality and grouped nature of this type of data can complicate standard analysis, summarization, and visualization. The aqp package was developed to address some of these issues, as well as provide a useful framework for the advancement of quantitative studies in soil genesis, geography, and classification.

The Algorithms for Quantitative Pedology (AQP) project was started as a place to put some of the functions I used on a daily basis. It slowly grew into a collection of algorithms that support studies related to soil genesis, taxonomy, and mapping. The short list of functionality includes:

  • classes for storing and manipulating soil profile data
  • functions for plotting soil profile sketches
  • color conversion functions (Munsell to RGB)
  • plotting functions for groups of soil profiles
  • plotting functions for depth-functions + uncertainty estimates
  • functions for re-sampling genetic horizons to regular sequences
  • profile aggregation functions
  • an implementation of between-profile dissimilarity calculation
  • functions for re-sampling XRD patterns to a common basis
  • functions for performing full-pattern matching of XRD patterns

Obviously, this project is far from an exhaustive implementation of what the name might otherwise suggest.

My reasoning for putting this package together is based on the premise that the methods used by pedologists have grown with our understanding of natural systems and our ability to work with larger and larger datasets; however, the coupling of theory and practice to modern numerical methods is rare outside of academic circles. That is not to say that federal agencies and the soil science community are not catching up. Rather, it seems like the adoption of these new approaches can be hindered by the lack of an implementation that is more accessible than an article in a scientific journal. I am speaking from my experience in the western United States, therefore I apologize if I have not adequately described corresponding conditions in other parts of the world.

A note on the naming of the package: it is currently named "Algorithms for Quantitative Pedology", inspired by Jenny's treatise on the topic. It could be argued that most modern pedologic studies (i.e. on the topics of soil genesis, morphology, classification, and mapping) are by and large a quantitative effort, so this name may be more of a convenient acronym than a descriptive label. In naming this package I was hoping to encapsulate the idea of how modern numerical methods might be applied to the field of pedology. The word "Pedometrics" does a pretty good job of conveying that idea. In particular, it falls well in line with Alex's initial description of Pedometrics: "the application of mathematical and statistical methods for the study of the distribution and genesis of soils".

On the other hand, the potential for a diffuse interpretation of the word "Pedometrics" could lead to a package that lacks focus and ultimately critical mass. In that case, an entire suite of related R packages would likely work. There are several cases on CRAN / R-Forge where large efforts are split into several, related packages. In addition, the term 'Pedometrics' means many different things to many different people.

I would like to present a couple ideas on how this project, and extensions of it, might function as a liaison between rapid advances in the realm of Pedometrics and those interested in the application of these methods.

  1. R is an ideal environment for working with soils information and the packaging system is a robust approach to encapsulating code, documentation, and discussion.
  2. A suite of R packages, each geared towards a specific realm of what most soil scientists do (or would like to do) with their data, may be one way in which common work flows can be streamlined and better documented. These packages would be the vehicle by which theory, defined in academic journals, could be translated into well-documented tools for a wider range of practitioners.
  3. Within this suite, a master package (why not call it "pedometrics") could contain classes and utility functions for describing and manipulating objects that represent the complex structure of a soil profile. The companion packages would build on these common data structures and methods, making the implementation of newly discovered or experimental methods much simpler.
  4. The R-Forge system would facilitate concurrent access to the source code and documentation through the use of revision control. In this way, changes would be documented, and versions could be seamlessly forked or merged so that rapid development would not result in an unstable product.

Ultimately, I am interested in getting modern tools into the hands of current pedologists, and in making the adoption of these tools simpler for new pedologists. If you are interested in these efforts, or have comments or criticisms, please don't hesitate to contact me. A vignette with an extended discussion of the AQP package is included in the package. I would encourage anyone who is interested to check the R-Forge site. A slightly older version is available on CRAN. Give the package a try by installing R, and then running:

# install the package
install.packages("aqp", repos="http://R-Forge.R-project.org", dep=TRUE, type='source')

# load the package
library(aqp)

# open the mini-manual (list the available vignettes)
vignette(package='aqp')
Numerical Integration/Differentiation in R: FTIR Spectra

Stumbled upon an excellent example of how to perform numerical integration in R. Below is an example of piece-wise linear and spline fits to FTIR data, and the resulting computed area under the curve. With a high density of points, the linear approximation appears to be the most efficient and is sufficiently accurate. With very large sequences, it may be necessary to adjust the value passed to the subdivisions argument of integrate(); strangely, larger values seem to solve problems encountered with large datasets.

FTIR Spectra Integration


# numerical integration in R
# example based on: http://tolstoy.newcastle.edu.au/R/help/04/10/6138.html

# sample data: FTIR spectra
x <- read.csv(url('http://casoilresource.lawr.ucdavis.edu/drupal/files/fresh_li_material.CSV'), header=FALSE)[100:400,]
names(x) <- c('wavenumber','intensity')

# fit a piece-wise linear function
fx.linear <- approxfun(x$wavenumber, x$intensity)

# integrate this function, over the original limits of x
Fx.linear <- integrate(fx.linear, min(x$wavenumber), max(x$wavenumber))

# fit a smooth spline, and return a function describing it
fx.spline <- splinefun(x$wavenumber, x$intensity)

# integrate this function, over the original limits of x
Fx.spline <- integrate(fx.spline, min(x$wavenumber), max(x$wavenumber))

# visual check, linear and spline fits shifted up for clarity
plot(x, type = "p", las=1, axes=FALSE, cex=0.5, ylim=c(0,0.12))
lines(x$wavenumber, fx.linear(x$wavenumber) + 0.01, col=2)
lines(x$wavenumber, fx.spline(x$wavenumber) + 0.02, col=3)
grid(nx=10, col=grey(0.5))
legend(x=615, y=0.11, legend=c('original','linear','spline'), col=1:3, pch=c(1,NA,NA), lty=c(NA, 1, 1), bg='white')

# results are pretty close
data.frame(method=c('linear', 'spline'), area=c(Fx.linear$value, Fx.spline$value), error=c(Fx.linear$abs.error,Fx.spline$abs.error))

  method     area        error
1 linear 27.71536 0.0005727738
2 spline 27.71527 0.0030796529
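As a quick sanity check on this approach, the same approxfun() / integrate() pattern recovers a known analytic area using base R only (note the subdivisions argument mentioned above; the grid density here is arbitrary):

```r
# sample y = x^2 on a dense grid; the true area on [0, 1] is 1/3
xs <- seq(0, 1, length.out = 101)
ys <- xs^2

# piece-wise linear interpolator and its numerical integral
fx <- approxfun(xs, ys)
Fx <- integrate(fx, 0, 1, subdivisions = 200)

# the piece-wise linear result is within ~1e-4 of 1/3
abs(Fx$value - 1/3)
```

The residual error is the trapezoid-rule error of the interpolant itself, which shrinks with the square of the grid spacing, consistent with the small abs.error values in the FTIR table above.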

splinefun() can also compute derivatives

par(mar=c(0,0,0,0), mfcol=c(2,1))
plot(x, type = "l", lwd=2, axes=FALSE)
grid(nx=10, col=grey(0.5))
plot(x$wavenumber, fx.spline(x$wavenumber, deriv=1), type='l', axes=FALSE)
lines(x$wavenumber, fx.spline(x$wavenumber, deriv=2), col='red')
grid(nx=10, col=grey(0.5))
abline(h=0, lty=2)
legend('topright', legend=c('1st derivative','2nd derivative'), lty=1, col=1:2, bg='white')

Numerical Estimation of Derivatives

Plotting XRD (X-Ray Diffraction) Data

Some examples on how to prepare and present data collected from an XRD analysis. The clay fraction from seven horizons was analyzed by XRD, using the five common treatments: potassium saturation (K), potassium saturation heated to 350 Deg C (K 350), potassium saturation heated to 550 Deg C (K 550), magnesium saturation (Mg), and magnesium + glycerin saturation (Mg+GLY). These data files have been attached, and can be found near the bottom of the page.

Plotting the entire data set with lattice graphics:

## load libs
library(lattice)
library(reshape)

## read the composite data in as a table
## format is 2theta,MG,MG+GLY,K,K350,K550
h1 <- read.csv("tioga1_0-8.csv", header=FALSE)
h2 <- read.csv("tioga1_8-15.csv", header=FALSE)
h3 <- read.csv("tioga1_15-35.csv", header=FALSE)
h4 <- read.csv("tioga1_35-65.csv", header=FALSE)
h5 <- read.csv("tioga1_65-90.csv", header=FALSE)
h6 <- read.csv("tioga1_90-120.csv", header=FALSE)
h7 <- read.csv("tioga1_120-150.csv", header=FALSE)

## load some common d-spacings:
d_spacings <- c(0.33,0.358,0.434,0.482,0.717,1,1.2,1.4,1.8)
d_spacing_labels <- c(".33", ".36", ".43", ".48", ".7","1.0","1.2","1.4","1.8")

## combine horizons into a single grouped data frame
xrd <- make.groups(h1, h2, h3, h4, h5, h6, h7)
names(xrd) <- c('x', 'MG', 'MG+GLY', 'K', 'K 350', 'K 550', 'horizon')

## convert data into long format
xrd.long <- melt(data=xrd, id.var=c('x', 'horizon'), measure.var=c('K','K 350', 'K 550', 'MG', 'MG+GLY'), variable_name='treatment')

## set a better ordering of the treatments
xrd.long$treatment <- ordered(xrd.long$treatment, c('MG', 'MG+GLY', 'K', 'K 350', 'K 550'))

## change the strip background colors
##  trellis.par.set(list(strip.background = list(col = grey(c(0.9,0.8)) )))

## plot the data along with some common d-spacings:
xyplot(value ~ x | treatment + horizon , data=xrd.long, as.table=TRUE, ylim=c(0,500), xlab='Deg 2Theta', ylab='Counts', panel=function(x, y, ...) {panel.abline(v=(asin(0.154/(2*d_spacings)) * 180/pi * 2), col=grey(0.9)) ; panel.xyplot(x, y, ..., type='l', col='black')} )

## another approach: labels on the side
xyplot(value ~ x | horizon + treatment , data=xrd.long, as.table=TRUE, ylim=c(0,500), xlab='Deg 2Theta', ylab='Counts', panel=function(x, y, ...) {panel.abline(v=(asin(0.154/(2*d_spacings)) * 180/pi * 2), col=grey(0.9)) ; panel.xyplot(x, y, ..., type='l', col='black')}, strip.left=TRUE, strip=FALSE)

Example XRD plot with lattice graphics: 7 horizons and 5 treatments
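The inline conversion of d-spacing to degrees 2-theta used in the panel functions above is just Bragg's law; as a small self-contained helper (0.154 nm is the Cu K-alpha wavelength assumed above):

```r
# convert a d-spacing (nm) to degrees 2-theta via Bragg's law:
# lambda = 2 d sin(theta)  =>  2-theta = 2 * asin(lambda / (2 d)) * 180/pi
two_theta <- function(d, lambda = 0.154) {
  2 * asin(lambda / (2 * d)) * 180 / pi
}

# e.g. the 0.717 nm kaolinite spacing falls near 12.3 degrees 2-theta
two_theta(0.717)
```

Larger d-spacings plot at smaller angles, which is why the 1.4 and 1.8 nm reference lines sit at the low-angle end of the diffractograms.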

Find peaks in an XRD dataset

Locating relevant peaks in an X-ray diffractogram is an important step in identifying phyllosilicate minerals in soils. An automated approach to finding peaks in any dataset was presented by Martin Maechler on the R-Help mailing list (Nov 25, 2005). Paste the following functions into an R session to use them.

peaks <- function(series, span = 3, do.pad = TRUE) {
    if((span <- as.integer(span)) %% 2 != 1) stop("'span' must be odd")
    s1 <- 1:1 + (s <- span %/% 2)
    if(span == 1) return(rep.int(TRUE, length(series)))
    z <- embed(series, span)
    v <- apply(z[,s1] > z[, -s1, drop=FALSE], 1, all)
    if(do.pad) {
        pad <- rep.int(FALSE, s)
        c(pad, v, pad)
    } else v
}

peaksign <- function(series, span = 3, do.pad = TRUE)
    ## Purpose: return (-1 / 0 / 1) if series[i] is ( trough / "normal" / peak )
    ## ----------------------------------------------------------------------
    ## Author: Martin Maechler, Date: 25 Nov 2005

    if((span <- as.integer(span)) %% 2 != 1 || span == 1)
        stop("'span' must be odd and >= 3")
    s1 <- 1:1 + (s <- span %/% 2)
    z <- embed(series, span)
    d <- z[,s1] - z[, -s1, drop=FALSE]
    ans <- rep.int(0:0, nrow(d))
    ans[apply(d > 0, 1, all)] <- as.integer(1)
    ans[apply(d < 0, 1, all)] <- as.integer(-1)
    if(do.pad) {
        pad <- rep.int(0:0, s)
        c(pad, ans, pad)
    } else ans
}

check.pks <- function(y, span = 3)
    stopifnot(identical(peaks( y, span), peaksign(y, span) ==  1),
              identical(peaks(-y, span), peaksign(y, span) == -1))

for(y in list(1:10, rep(1,10), c(11,2,2,3,4,4,6,6,6))) {
    for(sp in c(3,5,7))
        check.pks(y, span = sp)
    stopifnot(peaksign(y) == 0)
}
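
A quick sanity check: run peaks() on a tiny synthetic series where the answer is obvious. The function is repeated here (from the listing above) so the snippet runs on its own:

```r
# peaks() from the listing above (Martin Maechler, R-Help, 2005),
# repeated here so this snippet is self-contained
peaks <- function(series, span = 3, do.pad = TRUE) {
    if ((span <- as.integer(span)) %% 2 != 1) stop("'span' must be odd")
    s1 <- 1:1 + (s <- span %/% 2)
    if (span == 1) return(rep.int(TRUE, length(series)))
    z <- embed(series, span)
    v <- apply(z[, s1] > z[, -s1, drop = FALSE], 1, all)
    if (do.pad) {
        pad <- rep.int(FALSE, s)
        c(pad, v, pad)
    } else v
}

# a toy "diffractogram" with a single obvious peak at position 5
y <- c(1, 2, 3, 5, 9, 5, 3, 2, 1)
which(peaks(y, span = 3))
```

A wider span requires a point to exceed more of its neighbors, which is why larger values behave like a lower "sensitivity" in the XRD example below.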

Commands to find and plot the peaks, based on suggestions from the author of the peaks() function.

## load some sample data
d <- read.csv("tioga1_35-65.csv", header=FALSE)

## name the columns
names(d) <- c('x', 'MG', 'MG+GLY', 'K', 'K 350', 'K 550')

## locate peaks in the 'K' signal
## the second argument is the "sensitivity" of the peak finding algorithm
d.peaks <- peaks(d$K, 35)

## save a vector of the positions in the K signal where the peaks were identified
peak_idx <- which(d.peaks)

## simple plot of raw K signal
plot(K ~ x, data=d, type="l", cex=.25, main="Tioga1 35-65cm\nK Treatment", xlab="Deg. 2 Theta", ylab="Intensity", ylim=c(0,max(d$K) + 50))

## add peaks: note that we are subsetting the original data by the peak location index
points(K ~ x, data=d[peak_idx, ], col = 2, cex = 1.5)

## compute peak d-spacings via Bragg's law: d = lambda / (2 sin(theta)), theta = x/2
peak_d_spacings <- signif(0.154 / (2 * sin((d$x[peak_idx] / 2) * pi/180)), 3)

## annotate d-spacings / or peak index
## text(d$x[peak_idx], d$K[peak_idx] + 20, peak_d_spacings, col='blue', cex=0.75 )
text(d$x[peak_idx], d$K[peak_idx] + 20, 1:length(peak_d_spacings), col='blue', cex=0.75 )

## print a simple table of d-spacings by index
data.frame(peak_d_spacings)

   peak_d_spacings
1            1.370
2            1.040
3            0.751
4            0.638
5            0.563
6            0.488
7            0.452
8            0.435
9            0.367
10           0.347
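
The conversions between diffraction angle and d-spacing used throughout this page follow Bragg's law, d = lambda / (2 sin theta), with theta = (2-theta)/2 and lambda = 0.154 nm (Cu K-alpha). A minimal sketch of the round trip:

```r
# Bragg's law round trip, using the Cu K-alpha wavelength from the examples above
lambda <- 0.154  # nm

# d-spacing -> diffraction angle (degrees 2-theta)
two_theta_from_d <- function(d) asin(lambda / (2 * d)) * 180 / pi * 2

# diffraction angle (degrees 2-theta) -> d-spacing
d_from_two_theta <- function(x) lambda / (2 * sin((x / 2) * pi / 180))

# a 1.4 nm spacing (e.g. a common phyllosilicate 001 spacing) survives the round trip
d_from_two_theta(two_theta_from_d(1.4))
```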

Automatic location of peaks in an XRD dataset with R
Peaks found with two different tolerance settings.

Some ideas on annotating common d-spacings

## note that we need to define this function before we can use it
## put all of the plotting commands into a wrapper function:
plot_xrd <- function(d) {
        ## plot the diffractograms offset by 200
        plot(MG ~ x, data=d, type="l", cex=.25, main="Tioga1 35-65cm", xlab="2 Theta", ylab="Intensity", xlim=c(2,32), ylim=c(0,1600), xaxt='none')
        ## add the other treatments
        lines(MG_GLY + 200 ~ x, data=d)
        lines(K + 400 ~ x, data=d)
        lines(K_350 + 600 ~ x, data=d)
        lines(K_550 + 800 ~ x, data=d)

        ## label the lines
        text(31.5, c(30,230,430,630,830), c("MG","MG+GLY","K25","K350","K550"))
        ## plot the zero-line on each graph:
        abline(h=c(0,200,400,600,800), lty=2, col="gray")
        ## plot the established boundaries to these common spacings
        abline(v=(asin(0.154/(2*c(0.71,0.73,0.72,0.75,0.99,1.01,1.24,1.28,1.4,1.5,1.77,1.8))) * 180/pi * 2), col="green", lty=2)
        ## plot some common d-spacings
        # abline(v=(asin(0.154/(2*c(.715,.73,1,1.2,1.4,1.8))) * 180/pi * 2), col="blue")
        ## annotate the lines, recall that d-spacing is in reverse order with respect to Two_theta
        text((asin(0.154/(2*c(.715,.73,1,1.2,1.4,1.8))) * 180/pi * 2), c(1300,1400,1400,1400,1400,1400), c(".715",".73","1.0","1.2","1.4","1.8"), col=1, cex=1)
        ## add the axis
        axis(1, 1:30)
}

## load some sample data
d <- read.csv("tioga1_35-65.csv", header=FALSE)

## name the columns
names(d) <- c('x', 'MG', 'MG_GLY', 'K', 'K_350', 'K_550')

## run the wrapper function to plot the Tioga1 35-65cm data:
plot_xrd(d)
Example XRD plot 2: illustrating common d-spacings

Two-page display of XRD data for an entire soil profile

Multi-horizon XRD sample plot 1
Sample 1
Multi-horizon XRD sample plot 2
Sample 2

Summarizing Grouped Data in R

A colleague of mine recently asked about computing basic summary statistics from grouped data in R. Here are a couple of examples that I suggested. Additional documentation for the plyr package can be found here.

Code Snippets

# load libraries
library(lattice) # nice looking plots
library(plyr) # advanced aggregation functions

# generate 100 random obs
x <- rnorm(100)

# generate treatment labels
treatment <- rep(letters[1:5], each=4)

# generate depth labels
depth <- rep(c('0-10', '10-20'), 50)

# combine into a single dataframe
d <- data.frame(x, treatment, depth)

# check out the dataframe:
str(d)

# visually check data with box-whisker plot
bwplot(x ~ treatment:depth, data=d, scales=list(y=list(tick.number=10, cex=0.75), x=list(rot=45, cex=0.75)), ylab='Measured Variable', xlab='Treatment / Group')

# calculate median by treatment and depth
aggregate(d$x, by=list(d$treatment, d$depth), median)

# Group.1 is the treatment
# Group.2 is the depth
# x is the median
   Group.1 Group.2            x
1        a    0-10  0.382152173
2        b    0-10  0.347044867
3        c    0-10  0.384062345
4        d    0-10  0.499198983
5        e    0-10 -0.191705870
6        a   10-20 -0.005618922
7        b   10-20  0.066331780
8        c   10-20  0.328471014
9        d   10-20 -0.049369325
10       e   10-20 -0.097184000

# another approach using ddply()
# compute a summary by treatment X depth
# returning the result as a nice data frame
ddply(d, .(treatment, depth), function(i) summary(i$x))

# result looks like this:
   treatment depth    Min.  1st Qu.    Median      Mean 3rd Qu.   Max.
1          a  0-10 -0.8356 -0.46760  0.382200  0.376500  0.8635 2.4020
2          b  0-10 -1.8050 -0.55550  0.347000  0.006561  0.5673 1.0630
3          c  0-10 -0.5425 -0.04595  0.384100  0.371000  0.5507 1.5120
4          d  0-10 -1.3770 -0.38070  0.499200  0.339300  1.1520 1.5870
5          e  0-10 -1.2770 -0.43100 -0.191700 -0.115700  0.4459 1.1000
6          a 10-20 -1.9890 -0.22380 -0.005619 -0.079500  0.4634 1.5950
7          b 10-20 -1.4710 -0.60670  0.066330  0.013510  0.6370 1.4660
8          c 10-20 -0.7099 -0.25470  0.328500  0.360600  0.7653 2.1730
9          d 10-20 -2.2150 -0.80430 -0.049370 -0.126100  0.4917 1.9800
10         e 10-20 -1.0440 -0.54830 -0.097180 -0.057270  0.4457 0.9438
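
For completeness, base R can produce the same medians as a treatment-by-depth table with tapply(), no extra packages needed (the data are regenerated here with a seed, so the numbers differ from the output above):

```r
# rebuild the example data (seeded, so this snippet is reproducible)
set.seed(1)
x <- rnorm(100)
treatment <- rep(letters[1:5], each = 4)
depth <- rep(c('0-10', '10-20'), 50)
d <- data.frame(x, treatment, depth)

# medians in a 5 x 2 table: rows are treatments, columns are depths
med.tab <- tapply(d$x, list(d$treatment, d$depth), median)
med.tab
```

The table form is handy for quick inspection; the aggregate()/ddply() results above are better suited to further processing.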

Using lm() and predict() to apply a standard curve to Analytical Data

R: Multi-figure plot of Carlo-Erba Data

Load input data (see attached files at bottom of this page)

#first the sample data
#note that field sep might be different based on pre-formatting
cn <- read.table("deb_pinn_C_N-raw.final.txt", sep=" ", header=TRUE)

#then the standards:
cn_std <- read.table("deb_pinn_C_N-standards.final.txt", sep="\t", header=TRUE)

# compute simple linear models from standards
# "mg_nitrogen as modeled by area under curve"
lm.N <- lm(mg_N ~ area_N, data=cn_std)
lm.C <- lm(mg_C ~ area_C, data=cn_std)

# check std curve stats:
summary(lm.N)
 Multiple R-Squared: 0.9999,     Adjusted R-squared: 0.9999
summary(lm.C)
 Multiple R-Squared:     1,      Adjusted R-squared:     1

Apply the standard curve to the raw measurements

# note that the predict method is looking for column names that were originally
# used in the creation of the lm object
# i.e. area_N for lm.N and area_C for lm.C
# therefore it is possible to pass the original data frame with both
# values to predict(), while specifying the lm object
cn$mg_N <- predict(lm.N, cn)
cn$mg_C <- predict(lm.C, cn)
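
The same lm()/predict() pattern can be demonstrated without the attached data files. The slope (24e6 area counts per mg) and the noise level below are made-up numbers, for illustration only:

```r
# synthetic standards: known mg of N plus a nearly linear detector response
# (slope and noise are invented for this sketch)
set.seed(42)
std <- data.frame(mg_N = c(0.05, 0.1, 0.2, 0.4, 0.8))
std$area_N <- 24e6 * std$mg_N + rnorm(5, sd = 1e4)

# fit the standard curve: "mg nitrogen as modeled by area under curve"
fit <- lm(mg_N ~ area_N, data = std)

# "unknown" samples: predict() picks out the area_N column by name
unknowns <- data.frame(area_N = c(2e6, 9.6e6))
predict(fit, unknowns)
```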

Merge sample mass to calculate percent C/N by mass

#read in the initial mass data, note that by default string data will be read in as a factor
# i.e. factors are like treatments, and this data type will not work in some functions
cn.mass <- read.table("all_samples.masses.txt", header=TRUE, sep="\t")

#take a look at how the mass data was read in by read.table()
str(cn.mass)
'data.frame':   75 obs. of  5 variables:
 $ id         : Factor w/ 26 levels "004K","007K",..: 15 16 17 18 19 20 21 22 23 24 ...
 $ pedon_id   : Factor w/ 18 levels "004K","007K",..: 15 15 15 18 18 18 17 17 17 16 ...
 $ horizon_num: int  2 5 7 2 4 6 2 4 5 2 ...
 $ sample_id  : Factor w/ 75 levels "A1","A10","A11",..: 23 24 14 15 16 25 29 30 31 32 ...
 $ sample_mg  : num  24.6 27.5 33.3 25.9 25.8 ...

# use the merge() function to join the two dataframes based on the cell_id column
# merge() is simplest with plain character columns rather than factors:
# convert them to characters in upper case, and append them to the original dataframes
# note that merge is case sensitive!
cn$cell_id <- toupper(as.character(cn$sample_id))
cn.mass$cell_id <- toupper(as.character(cn.mass$sample_id))

#only keep our pedon data, leave behind the checks
cn.complete <- merge(x=cn, y=cn.mass, by.x="cell_id", by.y="cell_id", sort=FALSE, all.y=TRUE)

##calculate percent N and C, appending to the cn.complete dataframe
cn.complete$pct_N <- (cn.complete$mg_N / cn.complete$sample_mg) * 100
cn.complete$pct_C <- (cn.complete$mg_C / cn.complete$sample_mg) * 100
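
The case-sensitivity pitfall is easy to reproduce with two tiny dataframes (sample IDs invented for the illustration):

```r
# toy dataframes: same samples, but IDs entered with different case
a <- data.frame(sample_id = c('a1', 'a2'), mg_N = c(0.09, 0.01))
b <- data.frame(sample_id = c('A1', 'A2'), sample_mg = c(24.6, 27.5))

# a naive merge finds no matches, because 'a1' != 'A1'
nrow(merge(a, b, by = 'sample_id'))

# normalize case first, as done above with toupper()
a$cell_id <- toupper(as.character(a$sample_id))
b$cell_id <- toupper(as.character(b$sample_id))
merged <- merge(a, b, by = 'cell_id')
nrow(merged)
```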

#look at the results:
str(cn.complete)
'data.frame':   75 obs. of  13 variables:
 $ cell_id    : chr  "B8" "B9" "B10" "B11" ...
 $ sample_id.x: Factor w/ 81 levels "A1","A10","A11",..: 24 25 15 16 17 26 30 31 32 33 ...
 $ area_N     : num  2225431  208028  341264 1377688  168328 ...
 $ area_C     : num  85307240  8296664 14624760 50879560  6690868 ...
 $ mg_N       : num  0.09261 0.01096 0.01635 0.05830 0.00935 ...
 $ mg_C       : num  1.2609 0.1204 0.2141 0.7510 0.0967 ...
 $ id         : Factor w/ 26 levels "004K","007K",..: 15 16 17 18 19 20 21 22 23 24 ...
 $ pedon_id   : Factor w/ 18 levels "004K","007K",..: 15 15 15 18 18 18 17 17 17 16 ...
 $ horizon_num: int  2 5 7 2 4 6 2 4 5 2 ...
 $ sample_id.y: Factor w/ 75 levels "A1","A10","A11",..: 23 24 14 15 16 25 29 30 31 32 ...
 $ sample_mg  : num  24.6 27.5 33.3 25.9 25.8 ...
 $ pct_N      : num  0.3759 0.0398 0.0491 0.2254 0.0363 ...
 $ pct_C      : num  5.117 0.438 0.643 2.903 0.375 ...

#save the data for further processing:
write.table(cn.complete, file="cn.complete.table", col.names=TRUE, row.names=FALSE)

Measure the accuracy of the sensor in the machine with simple correlation

### get a measure of how accurate the sensor was, based on our checks:
#just the first 5 columns, in case there are extra columns
cn.checks <- cn[c(13,26,39,52,65,78),][1:5]

#make a list of the mg of ACE in each check
checks.mg_ACE <- c(0.798, 1.588, 1.288, 1.574, 1.338, 1.191)

#make a column of the REAL mg_N based on the percent N in ACE
cn.checks$real_mg_N  <- checks.mg_ACE * 0.104

#make a column of the REAL mg_C based on the percent C in ACE
cn.checks$real_mg_C  <- checks.mg_ACE * 0.711

# check with cor()
cor(cn.checks$mg_N, cn.checks$real_mg_N)
cor(cn.checks$mg_C, cn.checks$real_mg_C)

Create a multi-figure diagnostic plot

layout(mat=matrix(c(1, 4, 2, 3), ncol=2, nrow=2), widths=c(1,1), heights=c(1,2))
#first the std curves
par(mar = c(4,4,2,2))

plot(mg_N ~ area_N, data=cn_std, xlab="Area Counts", ylab="mg", main="Std Curve for N", cex=0.7, pch=16, cex.axis=0.6)
rug(cn$area_N, ticksize=0.02, col="gray")
rug(cn$mg_N, ticksize=0.02, col="gray", side=2)
abline(lm.N, col="gray", lty=2)
points(cn$area_N, cn$mg_N, col="blue", cex=0.2, pch=16)

plot(mg_C ~ area_C, data=cn_std, xlab="Area Counts", ylab="mg", main="Std Curve for C", cex=0.7, pch=16, cex.axis=0.6)
rug(cn$area_C, ticksize=0.02, col="gray")
rug(cn$mg_C, ticksize=0.02, col="gray", side=2)
abline(lm.C, col="gray", lty=2)
points(cn$area_C, cn$mg_C, col="blue", cex=0.2, pch=16)
#possible problems
points(cn$area_C[which(cn$area_C > 1.0e+08)], cn$mg_C[which(cn$area_C > 1.0e+08)] , col="red")

#sample plot of carbon distributions within each pedon:
#note that las=2 makes axis labels perpendicular to axis
par(mar = c(7,4,4,2))

#boxplot illustrating the within-pedon variation of Carbon
boxplot(cn.complete$pct_C ~ cn.complete$pedon_id , cex.axis=0.6, boxwex=0.2, las=2, main="Percent Total Carbon", ylab="% C", xlab="Pedon ID", cex=0.4)

#boxplot illustrating the within-pedon variation of Nitrogen
boxplot(cn.complete$pct_N ~ cn.complete$pedon_id , cex.axis=0.6, boxwex=0.2, las=2, main="Percent Total Nitrogen", ylab="% N", xlab="Pedon ID", cex=0.4)

Visualization of Grouped Data with Lattice Graphics (in R)

The lattice package for R provides several convenient functions for plotting data that has some kind of internal structure, usually in the form of groups. Lattice plotting functions create common visualizations of data (scatter plots, box-and-whisker plots, etc.) within a grid of panels defined by one or more grouping variables. See the manual page for xyplot for documentation and examples. The author of the lattice package has posted some excellent examples with code snippets from the definitive book on the topic (Lattice: Multivariate Data Visualization with R, by Deepayan Sarkar).

Complex Plots with Lattice
With the convenience of lattice graphics comes a price: complex plots cannot be built up element by element, as is the case with base graphics. Instead, one of several predefined panel functions must be used, or a custom panel function must be written. There is extensive documentation on this, but not nearly enough for the special case of a graph that includes both lines and point symbols. In addition, lattice functions require that all data to be plotted occur in the same dataframe. This example presents one possible solution for plotting grouped data (via lattice) that combines different symbolization (lines and points) and multiple source dataframes.

The following example illustrates the changes in the shape of the logistic function that occur with 3 possible 'slope' terms (b) and 3 possible 'intercept' terms (a). Each combination of slope and intercept is used as a grouping variable, so the resulting figure contains 9 panels, one for each slope/intercept combination. The panels are labeled with the respective slope (green panel title) and intercept (yellow panel title). To demonstrate plotting of mixed symbol types, an unrelated set of binary data was generated and split into the same 9 groupings. See the R code below for the full story.

Lattice Plot Example: Panels illustrate the effect of different slope and intercept terms for the logistic function.

Generate 9 Possible Versions of the Logistic Function

## load required libs
library(lattice)

## the logistic function
## a is the intercept, b the slope, and x the value
f <- function(a, b, x) {p <- (exp(a + b*x)) / (1 + exp(a + b*x))}

## a data vector
x.seq <- seq(-5, 5, by=0.5)
x <- rep(x.seq, 9)

## generate some
## slope and intercept possibilities
a.seq <- c(-2, 0, 2)
b.seq <- c(-1, -0.5, -0.25)

## create data  frame of all possible combinations
## i.e. all slope/intercept combinations
d <- expand.grid(a=a.seq, b=b.seq, x=x.seq, KEEP.OUT.ATTRS=FALSE)

## add the probability values back to the main DF
d$p <- f(d$a,d$b,d$x)

Generate Some Fake Binary Data

## generate some data on the predictor scale:
rx <- runif(min=-5, max=5, n=20)

## using the same groupings from the above example
## combine into a DF
rd <- expand.grid(a=a.seq, b=b.seq, x=rx, KEEP.OUT.ATTRS=FALSE)

## add pretend binary data
rd$rp <- rbinom(n=length(rd$x), 1, 0.5)

Merge the Two Dataframes

## add a dummy column to the original dataframe for the binary data
d$rp <- NA

## add dummy col for real probability
rd$p <- NA

## combine the first set of data with the pretend probabilities
dd <- make.groups(d, rd)

Plot Lines and Points with xyplot

xyplot(p + rp ~ x | factor(a) + factor(b),
data=dd,
xlab='Predictor Variable',
panel=panel.superpose, distribute.type=TRUE,
col=c(1,2), lwd=c(2,1), type=c('l','p'), pch=c(NA, '|')
)

The trick to plotting multiple symbol types can be summarized with:

  1. include all response variables to be plotted in the left-hand side of the plotting formula: p + rp ~ .... Note that they must share a common predictor variable; in this case the column "x" was used.
  2. use the panel function panel=panel.superpose and its argument distribute.type=TRUE to allow for more than one plotting style
  3. specify plotting styles (line style, symbol type, line width, color, etc.) as a vector which contains as many elements as response variables from the plotting formula: col=c(1,2), lwd=c(2,1), type=c('l','p'), pch=c(NA, '|')

With this approach in mind, it is possible to generate complicated plots using lattice graphics when data is of multiple type (line vs. point) and comes from multiple source dataframes. A common example of this scenario might involve plotting the continuous predictions from a linear model and the original points used to create the model.
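
As a concrete setup for that last scenario, here is a sketch (with made-up data) that prepares model predictions and original observations for the same make.groups() / distribute.type approach used above:

```r
library(lattice)

# made-up data and a simple linear model
set.seed(1)
x <- runif(30, 0, 10)
y <- 2 + 0.5 * x + rnorm(30, sd = 0.5)
fit <- lm(y ~ x)

# dense grid of predictions (drawn as a line) and the raw data (drawn as points)
grid <- data.frame(x = seq(0, 10, by = 0.1))
grid$y <- predict(fit, grid)
dd <- make.groups(model = grid, data = data.frame(x = x, y = y))

# plot with one line group and one point group:
# xyplot(y ~ x, groups = which, data = dd, type = c('l', 'p'),
#        distribute.type = TRUE, pch = 16)
```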

Lattice Plot Example 2: Data from a logistic regression model, including fitted response, standard error, and original data points.

Working with Spatial Data

A collection of notes, examples, references, and thoughts on working with spatial data.

Converting Alpha-Shapes into SP Objects

I recently read about a new R package called alphahull (paper) that sounds like a good candidate for addressing this request regarding concave hulls. Below are some notes on computing alpha-shapes and alpha-hulls from spatial data and converting the results returned by ashape() and ahull() into SP-class objects. Note that the functions are attached at the bottom of the page. Be sure to read the license for the alphahull package if you plan to use it in your work.

Alpha-Shape Example

## not widely tested!

# need these


# read point vector in from GRASS
x <- readVECT6('rtk_pts_5_1')

# extract coordinates
x.coords <- coordinates(x)

# alpha-shape: 100 meter threshold
x.as <- ashape(x.coords[,1], x.coords[,2], alpha=100)

# alpha-hull: 30 meter threshold
x.ah <- ahull(x.coords[,1], x.coords[,2], alpha=30)

plot(x.as, cex=0.5, pch=4, xlab='Easting (m)', ylab='Northing (m)', main=expression(paste('100m ', alpha, '-Shape')), asp=1)

plot(x.ah, cex=0.5, pch=4, xlab='Easting (m)', ylab='Northing (m)', main=expression(paste('30m ', alpha, '-Hull')), asp=1)

## convert into SP objects

# alpha-shape
x.as.spldf <- ashape_to_SPLDF(x.as, proj4string=x@proj4string)

# alpha-hull
x.ah.spldf <- ahull_to_SPLDF(x.ah, proj4string=x@proj4string)

# check: OK
pdf(file='ashape_ahull_demo.pdf', width=6, height=6)
plot(x.as.spldf)
lines(x.ah.spldf, col='red')
points(x, cex=0.5, pch=4, col='blue')
legend('bottomright', legend=c(expression(paste('100m ', alpha, '-Shape')), expression(paste('30m ', alpha, '-Hull')), 'Observation'), lty=c(1,1,NA), pch=c(NA,NA,4), col=c('black', 'red', 'blue'), bty='n')
dev.off()

# save back to GRASS: OK
writeVECT6(x.as.spldf, 'rtk_ashape')

# save back to GRASS: OK
writeVECT6(x.ah.spldf, 'rtk_ahull')

Customizing Maps in R: spplot() and latticeExtra functions

I recently noticed the new latticeExtra page on R-forge, which contains many very interesting demos of new lattice-related functionality. There are strong opinions about the "best" graphics system in R (base graphics, grid graphics, lattice, ggplot, etc.)-- I tend to use base graphics for simple figures and lattice for depicting multivariate or structured data. The sp package defines classes for storing spatial data in R, and contains several useful plotting methods such as the lattice-based spplot(). This function, and back-end helper functions, provide a generalized framework for plotting many kinds of spatial data. However, sometimes with great abstraction comes great ambiguity-- many of the arguments that would otherwise allow fine tuning of the figure are buried in documentation for lattice functions. Examples are more fun than links to documentation, so I put together a couple of them below. They describe several strategies for placing and adjusting map legends-- either automatically, or manually added with the update() function. The last example demonstrates an approach for over-plotting 2 rasters. All of the examples are based on the meuse data set, from the gstat package.

Extended spplot() examples


# setup environment
library(sp)
library(gstat)
library(RColorBrewer)
library(latticeExtra)

# load example data
data(meuse)
data(meuse.grid)
data(meuse.alt)

# convert to SpatialPointsDataFrame
coordinates(meuse.grid) <- ~ x + y
coordinates(meuse) <- ~ x + y
coordinates(meuse.alt) <- ~ x + y

# convert to SpatialPixelsDataFrame
gridded(meuse.grid) <- TRUE

# convert 'soil' to factor and re-label
meuse.grid$soil <- factor(meuse.grid$soil, labels=c('A','B','C'))
meuse$soil <- factor(meuse$soil, levels=c('1','2','3'), labels=c('A','B','C'))

# example 1

# setup color scheme
cols <- brewer.pal(n=3, 'Set1')
p.pch <- c(2,3,4)

# generate list of trellis settings
tps <- list(regions=list(col=cols), superpose.polygon=list(col=cols), superpose.symbol=list(col='black', pch=p.pch))

# init list of overlays
spl <- list('sp.points', meuse, cex=0.75, pch=p.pch[meuse$soil], col='black')

# setup trellis options

# initial plot, missing key
p1 <- spplot(meuse.grid, 'soil', sp.layout=spl, colorkey=FALSE, col.regions=cols, cuts=length(cols)-1)

# add a key at the top + space for key
p1 <- update(p1, key=simpleKey(levels(meuse.grid$soil), points=FALSE, lines=FALSE, rect=TRUE, regions=TRUE, columns=3, title='Class', cex=0.75))

# add a key on the right + space for key
p1 <- update(p1, key=simpleKey(levels(meuse$soil), points=TRUE, columns=1, title='Class', cex=0.75, space='right'))


# example 2

# generate list of trellis settings
tps <- list(regions=custom.theme()$regions, superpose.symbol=list(col='black', pch=p.pch), fontsize=list(text=16))

p2 <- spplot(meuse.grid, 'dist', sp.layout=spl, colorkey=list(space='bottom', title='Distance'), scales=list(draw=TRUE, cex=0.5))

p2 <- update(p2, key=simpleKey(levels(meuse$soil), points=TRUE, columns=1, space='right'))


# example 3
# more colorkey tweaking and...
# overlay 2 grids with layer()

sp.grid <- function (obj, col = 1, alpha = 1, ...) {
    if (is.character(obj))
        obj <- get(obj)
    xy <- coordinates(obj)
    if (length(col) != 1 && length(col) != nrow(xy))
        warning("length of 'col' does not match the number of cells")
    gt <- as(getGridTopology(obj), "data.frame")
    grid.rect(x = xy[, 1], y = xy[, 2], width = gt$cellsize[1],
        height = gt$cellsize[2], default.units = "native",
        gp = gpar(fill = col, col = NA, alpha = alpha))
}
trellis.par.set(regions=custom.theme()$regions, superpose.polygon=list(col='black', alpha=0.25))

# first grid covers entire extent
p3 <- spplot(meuse.grid, 'dist', colorkey=list(space='bottom', width=1, height=0.5, tick.number=3))

# overlay partially transparent, kind of a hack...
p3 <- p3 + layer(sp.grid(meuse.grid[meuse.grid$soil == 'A', ], col='black', alpha=0.25))

p3 <- update(p3, key=simpleKey('Shaded Region', points=FALSE, lines=FALSE, rect=TRUE, columns=1, space='top'))


# example 4: merge all three together

# order matters
p4 <- c(p3,p2,p1, layout=c(3,1))
p4 <- update(p4, key=simpleKey(levels(meuse$soil), points=TRUE, columns=1, space='right'))


# save to file: note that we have to reset the 'regions' colors
trellis.par.set(regions=custom.theme()$regions)
png(file='spplot_examples.png', width=700, height=350)
print(p4)
dev.off()

Generation of Sample Site Locations [sp package for R]

Setting up a sampling design is a non-trivial aspect of any field experiment that includes a spatial component. The sp package for R provides a simple framework for generating point sampling schemes based on region-defining features (lines or polygons) and a sampling type (regular spacing, non-aligned, random, random-stratified, hexagonal grid, etc.). The rgdal package provides a set of functions for importing/exporting common vector data formats. This example demonstrates simple import/export, iterating over sp objects, and reconstituting new objects from lists of objects. A more complex sampling scheme is demonstrated here.

  1. Setup the environment, load some sample polygons, and tryout the spsample() function.
    # load required packages
    library(sp)
    library(rgdal)

    # read data:
    # note the funky syntax
    # note that this should have a REAL projection defined
    # an incorrect definition may result in an error from readOGR
    x <- readOGR('polys/polys.shp', layer='polys')

    # spsample will not sample each polygon, rather it works on the union of polys
    # try it:
    plot(x) ; points(spsample(x, n=100, type='random'), col='red', pch=3, cex=0.5)
  2. Sampling with spsample example 1

  3. Iterate through all polygons in our original dataset, generating approximately 100 sample points within each polygon. Note that we use sapply() to iterate through the list of polygons, and do.call('rbind', ...) to 'stack' the list elements back into a single SpatialPoints object.
    # hexagonal grid from lower-left corner
    s <- sapply(slot(x, 'polygons'), function(i) spsample(i, n=100, type='hexagonal', offset=c(0,0)))

    # we now have a list of SpatialPoints objects, one entry per polygon of original data
    plot(x) ; points(s[[4]], col='red', pch=3, cex=.5)

    # stack into a single SpatialPoints object
    s.merged <- do.call('rbind', s)
  4. Sampling with spsample example 2

  5. Now that the sample points for each polygon have been merged into a single SpatialPoints object, we need to attach a dataframe with the ID associating each sample point with its parent polygon. Attaching this data will "promote" the SpatialPoints object to a SpatialPointsDataFrame object.
    # add an id, based on source polygon:
    # extract the original IDs
    ids <- sapply(slot(x, 'polygons'), function(i) slot(i, 'ID'))

    # determine the number of ACTUAL sample points generated for each poly
    npts <- sapply(s, function(i) nrow(i@coords))

    # generate a reconstituted vector of point IDs
    pt_id <- rep(ids, npts)

    # promote to SpatialPointsDataFrame
    s.final <- SpatialPointsDataFrame(s.merged, data=data.frame(poly_id=pt_id))

    # check:
    plot(x) ; points(s.final, col=s.final$poly_id, pch=3, cex=0.5)
  6. Sampling with spsample example 3

  7. Copy over the spatial reference system data from the polygons object, and save sample points to a new shapefile. Note that you can only write to a shapefile if the object in question is a SpatialPointsDataFrame object.
    # copy source data spatial reference system to new object
    s.final@proj4string <- x@proj4string

    # write out to new file
    writeOGR(s.final, dsn='polys/', layer='rnd_pts', driver='ESRI Shapefile')
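
The ID-reconstitution step above relies on rep() accepting a vector 'times' argument, repeating each ID the corresponding number of times. In isolation (with invented IDs and counts):

```r
# per-polygon IDs, and the number of points actually generated in each polygon
# (values invented for this sketch)
ids <- c('p1', 'p2', 'p3')
npts <- c(2, 4, 3)

# rep() with a vector 'times' argument repeats each ID the right number of times
pt_id <- rep(ids, npts)
pt_id
```

This is why the ACTUAL per-polygon point counts must be measured after sampling: spsample() treats n as approximate, so the counts cannot be assumed in advance.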

Ordinary Kriging Example: GRASS-R Bindings

Update: 2012-02-13
Many of the examples used in this demonstration are now somewhat dated, probably inefficient, and in need of revision. I'll spend some time on an updated version for the GRASS wiki soon.

A simple example of how to use GRASS+R to perform interpolation with ordinary kriging, using data from the spearfish sample dataset. This example makes use of the gstat library for R.

Helpful References:

  • Isaaks, E.H. & Srivastava, R.M. An Introduction to Applied Geostatistics. Oxford University Press, 1989
  • GSTAT Manual
  • GSTAT Examples

Elevation Data and Sample Points: 300 randomly placed points where elevation data was sampled.

Data Prep:
As a contrived example, we will generate 300 random points within the current region, and sample an elevation raster at each of these points.

# set region:
g.region rast=elevation.dem

# extract some random points from an elevation dataset
v.random out=rs n=300

# create attribute table:
v.db.addtable map=rs columns="elev double"

# extract raster data at points
v.what.rast vect=rs rast=elevation.dem column=elev

# simple display:
d.rast elevation.dem
d.vect rs size=4

# start R

Load GRASS Data into R:
Remember that you will need to install these R packages onto your computer.

## load libraries
library(spgrass6)
library(gstat)

## read in vector dataset from above
G <- gmeta6()
x.has.na <- readVECT6('rs')

# remove records with NA
x <- x.has.na[-which(is.na(x.has.na@data$elev)),]

## create a grid which will define the interpolation
## note that it is based on the current GRASS region settings
grd <- gmeta2grd()

## make a new grid of (1)s
## be sure to use original data's proj data...
## doesn't work with the stuff stored in G...
new_data <- SpatialGridDataFrame(grd, data=data.frame(k=rep(1,G$cols*G$rows)), proj4string=CRS(x@proj4string@projargs))

## optionally we can use another raster of 1's as our interpolation mask
mask <- as(readRAST6("rushmore"), "SpatialPixelsDataFrame")
## need to manually set the coordinate system information:
mask@proj4string <- x@proj4string
## this new object could then be used in the 'newdata' argument to predict(), i.e.
## x.pred_OK <- predict(g, id='elev', newdata=mask)

Variogram Modeling:
A very simple example, using default parameters for a non-directional variogram, is presented below. Modeling the variogram for an actual spatial problem requires knowledge of your dataset (distribution, collection methods, etc.), the natural processes involved (stationarity vs. anisotropy?), and a bit about the assumptions of geostatistics.

## init our gstat object, with the model formula
## note that coordinates are auto-identified from the GRASS object
g <- gstat(id="elev", formula=elev ~ 1, data=x)

## view a variogram with specified parameters
plot(variogram(g, width=250, cutoff=10000, map=FALSE))

## optionally make a variogram map, and plot semi-variance for 10-point pairs or greater:
plot(variogram(g, width=250, cutoff=10000, map=TRUE), threshold=10)

## fit a linear variogram model- as this looks appropriate
## ... using default parameters
v.fit <- fit.variogram(variogram(g), vgm(model="Lin"))
plot(variogram(g, width=250, cutoff=10000, map=FALSE), model=v.fit)

## update gstat object with fitted variogram model
g <- gstat(g, id="elev", model=v.fit )

Variogram and Fitted Model: A linear variogram model was fitted to the elevation data.

Interpolation by Ordinary Kriging:
The prediction is done for every instance of a '1' in the object passed to the newdata= argument.

## interpolate with ord. kriging
x.pred_OK <- predict(g, id='elev', newdata=new_data)

Send Results Back to GRASS:

## write raster back to GRASS: interpolation and kriging variance:
## system('g.remove rast=elev.ok')
writeRAST6(x.pred_OK, 'elev.ok', zcol='elev.pred')
writeRAST6(x.pred_OK, 'elev.ok_var', zcol='elev.var')

## quit:
q()

Viewing Results in GRASS:

# reset colors to match original data:
r.colors map=elev.ok rast=elevation.dem

# give the kriging variance a grey color map
r.colors map=elev.ok_var color=rules <<EOF
0% white
100% black
EOF

# display the kriged interpolation, with intensity | saturation based on variance
d.his s=elev.ok_var h=elev.ok
# optional method:
# d.his i=elev.ok_var h=elev.ok
d.vect rs size=4

Interpolated Elevation Data via Ordinary Kriging: Hue is interpolated elevation value, saturation is based on the kriging variance.

Simple Comparison with RST:
RST (regularized splines with tension) and OK (ordinary kriging) are two common interpolation methods. Computing the RMSE (root-mean-square error) between the interpolated raster and the original raster provides a simple quantitative measure of how well the interpolation performed, at least in terms of the mean magnitude of error. A spatial description of interpolation error can be generated by subtracting the new raster from the original. Note that the steps involve cell-wise computation of the squared error (SE), then region-wise computation of the mean squared error (MSE); the square root of the MSE gives the root-mean-square error (RMSE).
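
The SE to MSE to RMSE chain described above reduces to a few lines of base R (the cell values below are invented stand-ins for the two rasters):

```r
# invented "original" and "interpolated" cell values standing in for the rasters
original     <- c(1300, 1350, 1400, 1450, 1500)
interpolated <- c(1310, 1340, 1460, 1455, 1440)

se   <- (interpolated - original)^2  # cell-wise, like the r.mapcalc step below
mse  <- mean(se)                     # region-wise, like r.univar's mean
rmse <- sqrt(mse)
rmse
```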

# compare with RST - tension of 60, same points
v.surf.rst in=rs elev=elev.rst zcol=elev tension=60
r.colors map=elev.rst rast=elevation.dem

# compute SE between kriged map and original
r.mapcalc "d = (elev.ok - elevation.dem)^2 "
r.colors map=d color=rainbow
d.rast d
d.vect rs size=4

# compute SE between RST map and original
r.mapcalc "d2 = (elev.rst - elevation.dem)^2"
r.colors map=d2 color=rainbow
d.rast d2
d.vect rs size=4

# compare results:

# compute RMSE for OK [sqrt(MSE)]
r.univar d

# compute RMSE for RST [sqrt(MSE)]
r.univar d2
# see table below:

Root-mean-square-error Comparison:
Looks like the RMSE values are pretty close...

Method   RMSE
OK       61 meters
RST      64 meters
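The same RMSE comparison can be cross-checked entirely in R. A minimal sketch, assuming the spgrass6 package is available and that the GRASS map names survive as column names after readRAST6() (both are assumptions; check names(x@data) first):

```r
# load the GRASS interface package
library(spgrass6)

# read the original and both interpolated rasters in one call
x <- readRAST6(c('elevation.dem', 'elev.ok', 'elev.rst'))

# drop cells that are NA in any map
d <- na.omit(x@data)

# cell-wise squared error -> mean -> square root
rmse <- function(est, obs) sqrt(mean((est - obs)^2))

rmse(d$elev.ok, d$elevation.dem)   # RMSE for OK
rmse(d$elev.rst, d$elevation.dem)  # RMSE for RST
```

These values should agree with the r.univar-based figures in the table above.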

Ordinary Kriging Example: R via text file

A simple example of how to use R to perform interpolation with ordinary kriging, using data from a text file. This example makes use of the gstat library for R. Additional examples of how to use the following gstat functions are included:

  • variogram maps
  • directional variogram plots
  • plotting the interpolated surface directly from R

Note that this example is not meant to be an authoritative guide on variogram selection, or proper modeling of anisotropy-- just an example. The Kansas Geological Survey has an interesting set of reports that illustrate selection of a directional variogram in the presence of a strong, regional trend.

Elevation Data and Sample Points: 300 randomly placed points where elevation data was sampled.

Data Prep:
Export the coordinates and elevation values from the previous example. See attached file elev.txt.

# add two new columns to the random point vector from the previous example
v.db.addcol map=rs columns="x double, y double"
# upload coordinates
v.to.db option=coor column=x,y map=rs
# export to text file
db.select rs fs="," > elev.csv

Start R:
Load in the text file, and coerce to format that gstat can use.

## load some libraries first:
library(sp)
library(gstat)

## load data
d <- read.csv('elev.csv')

## gstat does not like missing data, subset original data:
e <- na.omit(d)

## convert simple data frame into a spatial data frame object:
coordinates(e) <- ~ x+y

## test result with simple bubble plot:
bubble(e, zcol='elev', fill=FALSE, do.sqrt=FALSE, maxsize=2)

## create a grid onto which we will interpolate:
## first get the range in data
x.range <- as.integer(range(e@coords[,1]))
y.range <- as.integer(range(e@coords[,2]))

## now expand to a grid with 500 meter spacing:
grd <- expand.grid(x=seq(from=x.range[1], to=x.range[2], by=500), y=seq(from=y.range[1], to=y.range[2], by=500) )

## convert to SpatialPixel class
coordinates(grd) <- ~ x+y
gridded(grd) <- TRUE

## test it out:
plot(grd, cex=0.5)
points(e, pch=1, col='red', cex=0.7)
title("Interpolation Grid and Sample Points")

Interpolation Grid

Create GSTAT Objects:
Make some diagnostic plots, model variogram, check for anisotropy, etc.

## make gstat object:
g <- gstat(id="elev", formula=elev ~ 1, data=e)

## the original data had a large north-south trend, check with a variogram map
plot(variogram(g, map=TRUE, cutoff=4000, width=200), threshold=10)

## another approach:
# create directional variograms at 0, 45, 90, 135 degrees from north (y-axis)
v <- variogram(g, alpha=c(0,45,90,135))

## 0 and 45 deg. look good. let's fit a linear variogram model:
## an un-bounded variogram suggests additional source of anisotropy... oh well.
v.fit <- fit.variogram(v, model=vgm(model='Lin' , anis=c(0, 0.5)))

## plot results:
plot(v, model=v.fit, as.table=TRUE)

## update the gstat object:
g <- gstat(g, id="elev", model=v.fit )

Variogram Map

Directional Variogram Plots

Perform OK and View Results:
Examples using standard and lattice graphics.

## perform ordinary kriging prediction:
p <- predict(g, model=v.fit, newdata=grd)

## visualize it:

## base graphics
image(p, col=terrain.colors(20))
contour(p, add=TRUE, drawlabels=FALSE, col='brown')
points(e, pch=4, cex=0.5)
title('OK Prediction')

## lattice graphics: thanks to R. Bivand for advice on this
## alternatively plot quantiles with
## ... col.regions=terrain.colors(6), cuts=quantile(p$elev.pred) ...
pts <- list("sp.points", e, pch = 4, col = "black", cex=0.5)
spplot(p, zcol="elev.pred", col.regions=terrain.colors(20), cuts=19, sp.layout=list(pts), contour=TRUE, labels=FALSE, pretty=TRUE, col='brown', main='OK Prediction')

## plot the kriging variance as well
spplot(p, zcol='elev.var', col.regions=heat.colors(100), cuts=99, main='OK Variance',sp.layout=list(pts) )

## quit and convert saved EPS files to PNG:
## for i in *.eps ; do convert $i `basename $i .eps`.png ; done

OK Prediction: created with the spplot() function

OK Variance

Point-process modelling with the sp and spatstat packages

Some simple examples of importing spatial data from text files, converting between R data types, creating a point-process model, and evaluating the model. Input data sources are: soil pit locations with mollic and argillic diagnostic horizons (mollic-pedons.txt and argillic-pedons.txt), and a simplified outline of Pinnacles National Monument (pinn.txt). The outline polygon is used to define a window in which all operations are conducted.

The 'sp' package for R contains the function spsample(), which can be used to create a sampling plan for a given region of interest: i.e. the creation of n points within that region based on several algorithms. This example illustrates the creation of 50 sampling points within Pinnacles, according to the following criteria: regular (points are located on a regular grid), nonaligned (points are located on a non-aligned, grid-like pattern), random (points are located at random), and stratified (collectively exhaustive, see details here).

The 'spatstat' package for R contains several functions for creating point-process models: models describing the distribution of point events, e.g. the distribution of tree species within a forest. If covariate data are present (e.g. gridded precipitation, soil moisture, aspect, etc.) these covariates can be incorporated into the point-process model. Without covariate data, the model is based on a spatial distribution estimator function. Note that the development of such models is complicated by factors such as edge-effects, degree of stochasticity, spatial connectivity, and stationarity. These are rather contrived examples, so please remember to read up on any functions you plan to use for your own research. An excellent tutorial on analyzing spatial point patterns in R was recently published.

Helpful links
Spatstat Quick Reference
Print Version with Links

R: sampling design using the sp package: Four sampling designs

R: spatial density analysis with the spatstat package: Spatial density of each pedon type

R: spatial density analysis with the spatstat package (2): Spatial density of the four sampling designs

R: Example point-process model of mollic soils

R: Diagnostics of a simple point-process model

Note: This code should be updated to use the slot() function instead of the '@' syntax for accessing slots!

Load required packages and input data (see attached files at bottom of this page)

# load required packages
library(sp)
library(spatstat)
# read in pinnacles boundary polygon: x,y coordinates
# use the GRASS vector, as it should be topologically correct
# v.out.ascii format=standard in=pinn_bnd > pinn.txt ... edit out extra crud
pinn <- read.table('pinn.txt')
# read in mollic and argillic pedons
# see ogrinfo hack
m <- read.table('mollic-pedons.txt', col.names=c('x','y'))
a <- read.table('argillic-pedons.txt', col.names=c('x','y'))
# add a flag for the type of pedon
m$type <- factor('M')
a$type <- factor('A')
#combine into a single dataframe
pedons <- rbind.data.frame(a,m)

Using the functions from the 'sp' package create a polygon object from the pinn.txt coordinates

# create a spatial polygon class object
pinn.poly <- SpatialPolygons(list(Polygons(list(Polygon( pinn )), "x")))
# inspect this new object with str()
# total area of all polygons
# coordinates of first polygon: this is rough syntax!

Generate a sampling plan for 50 sites using regular grid, non-aligned grid, random, and random stratified approaches

# generate sample points within the Pinnacles boundary
p.regular <- spsample(pinn.poly, n = 50, "regular")
p.nalign <- spsample(pinn.poly, n = 50, "nonaligned")
p.random <- spsample(pinn.poly, n = 50, "random")
p.stratified <- spsample(pinn.poly, n = 50, "stratified")
# set up the plot environment: draw the boundary first, e.g. plot(pinn.poly)
# then overlay each of the sampling designs
points(p.regular, pch=16, col='red', cex=0.3)
points(p.nalign, pch=16, col='red', cex=0.3)
points(p.random, pch=16, col='red', cex=0.3)
points(p.stratified, pch=16, col='red', cex=0.3)

Convert 'sp' class objects to 'spatstat' analogues note the use of 'slots'

# pinn boundary:
# extract coordinates: and get a length - 1 value
p.temp <- pinn.poly@polygons[[1]]@Polygons[[1]]@coords
n <- length(p.temp[,1]) - 1
# create two vectors: x and y
# these will contain the reversed vertices, minus the last point
# in order to adhere to the spatstat specs: no repeating points, in counter-clockwise order
x <- rev(p.temp[,1][1:n])
y <- rev(p.temp[,2][1:n])
# make a list of coordinates: note that we are removing the last vertex
p.list <- list(x=x,y=y)
# make a spatstat window object from the polygon vertices
W <- owin(poly=p.list)
# pedons with their 'marks' i.e. pedon type, and the pinn boundary as the 'window'
pedons.ppp <- ppp(pedons$x, pedons$y, xrange=c(min(pedons$x), max(pedons$x)), yrange=c(min(pedons$y), max(pedons$y)) , window=W, marks=pedons$type)

Plot and summarize the new combined set of pedon data

# plot and summarize the pedons data:
# note the method used to subset the two 'marks'
plot(density.ppp(pedons.ppp[pedons.ppp$marks == 'M']), main="Mollic Point Density")
points(pedons.ppp[pedons.ppp$marks == 'M'], cex=0.2, pch=16)
plot(density.ppp(pedons.ppp[pedons.ppp$marks == 'A']), main="Argillic Point Density")
points(pedons.ppp[pedons.ppp$marks == 'A'], cex=0.2, pch=16)

Marked planar point pattern: 151 points
Average intensity 1.38e-06 points per unit area
frequency proportion intensity
A 62 0.411 5.67e-07
M 89 0.589 8.14e-07
Window: polygonal boundary
single connected closed polygon with 309 vertices
enclosing rectangle: [ 657228.3125 , 670093.8125 ] x [ 4030772.75 , 4047986.25 ]
Window area = 109337135.585938

Convert the sampling design points (from above) to 'spatstat' objects and plot their density

# convert the random datasets: using the same window:
ppp.regular <- ppp(p.regular@coords[,1], p.regular@coords[,2], window=W)
ppp.nalign <- ppp(p.nalign@coords[,1], p.nalign@coords[,2], window=W)
ppp.random <- ppp(p.random@coords[,1], p.random@coords[,2], window=W)
ppp.stratified <- ppp(p.stratified@coords[,1], p.stratified@coords[,2], window=W)
# visually check density of random points:
plot(density.ppp(ppp.regular), main="Regular Sampling")
points(ppp.regular, pch=16, cex=0.2)
plot(density.ppp(ppp.nalign), main="Non-Aligned Sampling")
points(ppp.nalign, pch=16, cex=0.2)
plot(density.ppp(ppp.random), main="Random Sampling")
points(ppp.random, pch=16, cex=0.2)
plot(density.ppp(ppp.stratified), main="Stratified Sampling")
points(ppp.stratified, pch=16, cex=0.2)

A simple, and probably flawed, attempt to use a point-process model for the pedon data: a third-order polynomial model for the distribution of pedons with a mollic epipedon. See the manual page for ppm() for detailed examples.

# model the spatial occurrence of mollic epipedons with a 3rd-order polynomial, assuming a Poisson process
fit <- ppm( unmark(pedons.ppp[pedons.ppp$marks == 'M']), ~polynom(x,y,3), Poisson())
# view the fitted model
plot(unmark(pedons.ppp[pedons.ppp$marks == 'M']), main="Mollic Pedons")
# plot some diagnostics on the fitted model: Pearson residuals (see references)
diagnose.ppm(fit, type="pearson")

# another example using a built-in dataset: the Lansing Forest
# fit nonstationary marked Poisson process
# with different log-cubic trend for each species
fit <- ppm(lansing, ~ marks * polynom(x,y,3), Poisson())

Point-process model diagnostic references from the spatstat manual
Baddeley, A., Turner, R., Moller, J. and Hazelton, M. (2005) Residual analysis for spatial point processes. Journal of the Royal Statistical Society, Series B 67, 617–666.
Stoyan, D. and Grabarnik, P. (1991) Second-order characteristics for stochastic structures connected with Gibbs point processes. Mathematische Nachrichten, 151:95–100.

Simple Map Creation


## load the maps package:
library(maps)

map('county', 'ca', interior=TRUE)

## add some user data in lon/lat format (x, y, z are assumed to be two-column lon/lat objects):
points(x, pch=4, cex=0.5, col=1)
points(y, pch=4, cex=0.5, col=2)
points(z, pch=4, cex=0.5, col=3)

Example Map

Some Ideas on Interpolation of Categorical Data



Wanted to make something akin to an interpolated surface for some spatially auto-correlated categorical data (presence/absence). I quickly generated some fake spatially auto-correlated data to work with using r.surf.fractal in GRASS. These data were converted into a binary map using an arbitrary threshold that looked about right-- splitting the data into something that looked 'spatially clumpy'.

Fig. 1: Simulated auto-correlated, categorical variable, with sampling points and derived voronoi polygons.

I had used voronoi polygons in the past to display connectivity of categorical data recorded at points, even though sparsely sampled areas tend to be over-emphasized. Figure 1 shows the fake spatially auto-correlated data (grey = present, white = absent), sample points (yellow boxes), and voronoi polygons. The polygons with thicker, red boundaries represent the "voronoi interpolation" of the categorical feature.


Interpolation by RST

Wanting something a little more interesting, I tried interpolating the presence/absence data via RST. Numerical interpolation of categorical data is usually not preferred, as it creates a continuum between discrete classes-- i.e. values that do not have a sensible interpretation. Throwing that aside for the sake of making a neat map, a color scheme was selected to emphasize the categorical nature of the interpolated surface (Figure 2).

Fig. 2: RST interpolation of 0-1 continuum: red=1, blue=0.


Conditional Indicator Simulation

Finally, I gave conditional indicator simulation a try-- this required two steps: 1) fitting a model variogram, and 2) simulation. This approach generates different output on each simulation; however, the output reproduces the original spatial pattern and variability. A more interesting map could be generated by running 1000 simulations and converting them into a single probability map.

Indicator Variogram: Empirical semi-variogram for indicator=1, with spherical model fit.

Fig. 3: Single conditional indicator simulation.





Code Snippets

Generate Some Data in GRASS

# set a reasonable resolution
g.region res=10 -ap

# simulate some spatially auto-correlated data
# and convert to categorical map
r.surf.fractal --o dimension=2.5 out=fractal
r.mapcalc "fractal.bin = if(fractal > 0, 1, 0)"
r.colors fractal.bin color=rules <<EOF
0 white
1 grey
EOF

v.random --o out=v n=100
v.db.addtable map=v
v.db.addcol map=v column="fractal double, class integer"
v.what.rast vect=v rast=fractal column=fractal
v.what.rast vect=v rast=fractal.bin column=class

# simplest approach
v.voronoi --o in=v out=v_vor

# try interpolation of classes...
v.surf.rst --o in=v zcol=class elev=v.interp
r.colors map=v.interp color=rules <<EOF
0% blue
0.5 white
100% red
EOF

Perform Indicator Simulation in R

# indicator simulation

# load required packages
library(spgrass6)  # readVECT6() / writeRAST6()
library(gstat)     # variogram(), fit.variogram(), krige()

# read vector
d <- readVECT6('v')

# convert class to factor
d@data <- transform(d@data, class=factor(class))

# inspect variogram of x$class == 1
plot(v <- variogram(I(class == 1) ~ 1, data = d))

# fit a spherical variogram with nugget
v.fit <- fit.variogram(v, vgm(psill=1, model='Sph', range=250, nugget=1))

# png(file='indicator_variogram.png', width=400, height=400, bg='white')
plot(v, model=v.fit)
# dev.off()

# make a grid to predict onto
G <- gmeta6()
grd <- gmeta2grd()

# new grid
new_data <- SpatialGridDataFrame(grd, data=data.frame(k=rep(1,G$cols*G$rows)), proj4string=CRS(d@proj4string@projargs))

# conditional indicator simulation:
# need to study this syntax
# run more simulations for an estimated probability
p <- krige(I(class == 1) ~ 1, d, new_data, v.fit, nsim=1, indicators=TRUE, nmax=40)

# write one back to GRASS
writeRAST6(p, 'indicator.sim', zcol='sim1')
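As noted above, a probability map can be estimated by averaging many indicator simulations. A hedged sketch, re-using the d, new_data, and v.fit objects from above (nsim=1000 may take a while, and the sim1...sim1000 column naming is an assumption worth checking with names(p.many@data)):

```r
# conditional indicator simulation: many realizations
p.many <- krige(I(class == 1) ~ 1, d, new_data, v.fit, nsim=1000, indicators=TRUE, nmax=40)

# cell-wise mean across all simulation columns approximates Pr(class == 1)
new_data$prob <- rowMeans(p.many@data)

# send the probability map back to GRASS
writeRAST6(new_data, 'indicator.prob', zcol='prob')
```

The resulting map can then be given a continuous color ramp with r.colors, just like the single-simulation map below.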

Make Some Maps in GRASS

# fix colors of the simulated map
r.colors map=indicator.sim color=rules << EOF
0 white
1 grey
EOF

# simple maps
d.rast v.interp
d.vect v icon=basic/box  size=7 fcol=yellow
d.vect v_vor type=area fcol=none where=class=0
d.vect v_vor type=area fcol=none width=2 where=class=1
d.out.file --o out=example1

d.rast fractal.bin
d.vect v icon=basic/box  size=7 fcol=yellow
d.vect v_vor type=area fcol=none where=class=0
d.vect v_vor type=area fcol=none col=red width=2 where=class=1
d.out.file --o out=example2

d.vect v_vor type=area fcol=white where=class=0
d.vect v_vor type=area fcol=grey where=class=1
d.vect v icon=basic/box  size=7 fcol=yellow
d.out.file --o out=example3

d.rast indicator.sim
d.vect v icon=basic/box size=7 fcol=yellow
d.out.file --o out=example4

Target Practice and Spatial Point Process Models

Simple application of spatial point-process models to spread patterns after some backyard target practice. Note that only a cereal box and 2 sheets of graph paper were injured in this exercise. Data files are attached at the bottom of this page; all distance units are in cm.

A simple experiment was conducted, solely for the purpose of collecting semi-random coordinates on a plane, in which a target was hit with 21 shots from distances of 15 and 30 feet. The ppm() function (spatstat package) in R was used to create point density maps, along with a statistical description of the likelihood of where each target would be hit were the experiment to be conducted again (via point-process modeling). While normally used to model the occurrence of natural phenomena or biological entities, point-process models can also be used to analyze one's relative accuracy at set target distances-- one more way in which remote sensing and GIS techniques can be applied to smaller, non-georeferenced coordinate systems.

Density Comparison: Pattern densities from the two experiments: 30 and 15 feet from target.

Load Data and Compute Density Maps:

### load some libraries
library(spatstat)      # ppp(), ppm(), density()
library(RColorBrewer)  # brewer.pal()

## read in the data
t_30 <- read.csv('target_30.csv')
t_15 <- read.csv('target_15.csv')

## an initial plot
plot(t_30, xlim=c(0,35), ylim=c(0,50))
points(t_15, col='red')

## convert to spatstat objects
t_30.ppp <- ppp(t_30$x, t_30$y, xrange=c(0,35), yrange=c(0,50) )
t_15.ppp <- ppp(t_15$x, t_15$y, xrange=c(0,35), yrange=c(0,50) )

## check via plot
points(t_15.ppp, col='red')

Fit Point-Process Models:

## fit point-process model
t_30_fit <- ppm(t_30.ppp, ~polynom(x,y,3), Poisson())
t_15_fit <- ppm(t_15.ppp, ~polynom(x,y,3), Poisson())

## plot density comparisons between two ranges
plot( density(t_30.ppp), col=brewer.pal('Blues', n=9), main="30 Feet")
points(t_30.ppp, pch=4, cex=1)

plot( density(t_15.ppp), col=brewer.pal('Oranges', n=9), main="15 Feet")
points(t_15.ppp, pch=4, cex=1)

## plot a fit of the 30 foot pattern
plot( density(t_30.ppp), col=brewer.pal('Blues', n=9), main="30 Feet")
points(t_30.ppp, pch=4, cex=1)

plot(t_30_fit, col=brewer.pal('Blues', n=9), trend=TRUE, cif=FALSE, pause=FALSE, how="image")
plot(t_30_fit, trend=TRUE, cif=FALSE, pause=FALSE, how="contour")
plot(t_30_fit, colmap=brewer.pal('Blues', n=9), trend=TRUE, cif=FALSE, pause=FALSE, how="persp", theta=0, phi=45)

## plot a fit of the 15 foot pattern
plot( density(t_15.ppp), col=brewer.pal('Oranges', n=9), main="15 Feet")
points(t_15.ppp, pch=4, cex=1)

plot(t_15_fit, col=brewer.pal('Oranges', n=9), trend=TRUE, cif=FALSE, pause=FALSE, how="image")
plot(t_15_fit, trend=TRUE, cif=FALSE, pause=FALSE, how="contour")
plot(t_15_fit, colmap=brewer.pal('Oranges', n=9), trend=TRUE, cif=FALSE, pause=FALSE, how="persp", theta=0, phi=45)

30 Foot PPM

15 Foot PPM


## convert to png:
for i in *.pdf ; do convert -density 300 +antialias $i `basename $i .pdf`.png ; done
for i in *.png ; do mogrify -resize 25% $i ; done

Tips on Using the 'sp' classes in R

The sp package provides several useful classes and methods for working with spatial data in R. The documentation is complete for the main set of classes and methods; however, there are few listings of extensive examples covering some of the less frequently encountered situations. Some examples of where the complexity of the sp classes can cause confusion include:

  • dealing with NA values
  • preserving NA values during import/export
  • replacing data attached to an sp class object
  • plotting several spatial datasets at once

In addition, there are still some minor bugs to be worked out in terms of reading/writing vector data using methods in the rgdal package. This is currently being worked on by members of the sp and GRASS development community.

Joining New Data to an Existing sp Object

# use rgdal to read in some vector data
library(rgdal)

# read something in, rows are identified by a column called 'id'
spatial.data <- readOGR(...)

# read in some tabular data, rows are identified by a column called 'id'
new_table <- read.csv(...)

# 'join' the new data with merge()
# all.x=TRUE is used to ensure we have the same number of rows after the join
# in case that the new table has fewer
merged <- merge(x=spatial.data@data, y=new_table, by.x='id', by.y='id', all.x=TRUE)

# generate a vector that represents the original ordering of rows in the sp object
correct.ordering <- match(spatial.data@data$id, merged$id)

# overwrite the original dataframe with the new merged dataframe, in the correct order
spatial.data@data <- merged[correct.ordering, ]

# check the ordering of the merged data, with the original spatial data
cbind(spatial.data@data$id, merged$id[correct.ordering])

Correctly Write 'NA' Values to Shapefile [bug in writeOGR()]

# libraries we need
library(rgdal)    # writeOGR()
library(foreign)  # write.dbf()

# pass 1: write the shapefile
writeOGR(spatial.data, dsn='new_folder', driver='ESRI Shapefile', layer='new_layer')

# re-make the DBF:
write.dbf(spatial.data@data, file='new_folder/new_layer.dbf')

Handle NA in Several Raster Images (when all maps share the same NA spatial distribution)

# need these to read in GRASS data and cluster it
library(spgrass6)  # readRAST6() / writeRAST6()
library(cluster)   # clara()

# read in several rasters into a new sp object:
x <- readRAST6(c('r1','r2','r3','r4'))

# we want to do some analysis that will not work when all rasters contain NA
# save a vector of positions where all of the rasters have a value of NA
x.nas <- which(is.na(x@data$r1) & is.na(x@data$r2) & is.na(x@data$r3) & is.na(x@data$r4))

# save a vector of positions that have no NA in any raster
x.vals <- which( !is.na(x@data$r1) & !is.na(x@data$r2) & !is.na(x@data$r3) & !is.na(x@data$r4))

# extract the records that only have complete data (no NA)
x.no.na <- x@data[x.vals, ]

# do something with the data
x.clara <- clara(x.no.na, k=6, stand=TRUE)

# add a new column to the original sp object to contain the results of the analysis
x@data$terrain_group <- NA

# fill in the locations where there were complete records with the results
# note that we use the original locations of the non-NA values
x@data$terrain_group[x.vals] <- x.clara$clustering

# save result back to GRASS
writeRAST6(x, zcol='terrain_group', vname='x_groups')

Handle NA in Several Raster Images (when maps have different NA spatial distribution)

# this one is from R. Bivand (thanks)
# assume that we are working with the 'data' slot on an sp object:
# generate some data; sprinkle in some NA ; convert to DF
x <- rnorm(100)
x[sample(1:100, 5)] <- NA
x.mat <- round(matrix(x, ncol=5), 2)
x.df <- as.data.frame(x.mat)

# extract an NA mask
x.na_mask <- complete.cases(x.df)
# just the 'rows' with complete data (across bands):
x.df[x.na_mask, ]

      V1    V2    V3    V4    V5
1   1.32 -1.27  0.00 -0.87  1.52
2  -0.44  1.33  0.78 -1.27  0.00
5   2.70 -0.08 -0.77  0.38  0.32
6  -0.52 -0.07  0.13 -0.43  1.12
7  -0.63  2.02  0.45  0.48 -0.59
8  -0.52 -0.78 -0.59 -0.07 -0.08
11 -1.37 -1.15  0.23  0.73  0.07
12 -0.11 -0.66  0.50 -1.14  0.71
13 -0.38 -0.08  1.00 -0.88 -0.19
14 -1.14 -0.45 -1.37 -0.43 -2.18
15  1.92  0.46 -0.72  0.12  0.27
16  0.51  0.22 -0.39 -1.54 -1.04
17  1.14 -1.11  1.04  0.00 -1.11
18  0.53  0.62 -0.10 -0.17  0.15
19  0.84  0.27 -1.01  0.87  0.31
20  0.82  0.87  0.46  1.35  0.13

# all the data:
x.df
      V1    V2    V3    V4    V5
1   1.32 -1.27  0.00 -0.87  1.52
2  -0.44  1.33  0.78 -1.27  0.00
3  -0.31 -0.13 -1.14  2.08    NA
4   0.76    NA -1.20 -0.54 -0.85
5   2.70 -0.08 -0.77  0.38  0.32
6  -0.52 -0.07  0.13 -0.43  1.12
7  -0.63  2.02  0.45  0.48 -0.59
8  -0.52 -0.78 -0.59 -0.07 -0.08
9  -0.38 -0.51  0.44    NA -0.81
10 -1.38    NA -0.65 -0.50    NA
11 -1.37 -1.15  0.23  0.73  0.07
12 -0.11 -0.66  0.50 -1.14  0.71
13 -0.38 -0.08  1.00 -0.88 -0.19
14 -1.14 -0.45 -1.37 -0.43 -2.18
15  1.92  0.46 -0.72  0.12  0.27
16  0.51  0.22 -0.39 -1.54 -1.04
17  1.14 -1.11  1.04  0.00 -1.11
18  0.53  0.62 -0.10 -0.17  0.15
19  0.84  0.27 -1.01  0.87  0.31
20  0.82  0.87  0.46  1.35  0.13

# generate something cool from complete data
p <- princomp(~ . , data=x.df[x.na_mask,])

# assign back to original data; rows with incomplete data remain NA
x.df$pca_1 <- NA
x.df$pca_1[x.na_mask] <- predict(p)[,1]

Visual Interpretation of Principal Coordinates (of) Neighbor Matrices (PCNM)

Principal Coordinates (of) Neighbor Matrices (PCNM) is an interesting algorithm, developed by D. Borcard and P. Legendre at the University of Montreal, for the multi-scale analysis of spatial structure. This algorithm is typically applied to a distance matrix, computed from the coordinates where some environmental data were collected. The resulting "PCNM vectors" are commonly used to describe variable degrees of possible spatial structure and its contribution to variability in other measured parameters (soil properties, species distribution, etc.)-- essentially a spectral decomposition of spatial connectivity. The algorithm has recently been updated and released as part of the PCNM package for R. Several other implementations of the algorithm exist; however, this seems to be the most up-to-date.

Related Presentations and Papers on PCNM

  • http://biol09.biol.umontreal.ca/ESA_SS/Borcard_&_PL_talk.pdf
  • Borcard, D. and Legendre, P. 2002. All-scale spatial analysis of ecological data by means of principal coordinates of neighbour matrices. Ecological Modelling 153: 51-68.
  • Borcard, D., P. Legendre, Avois-Jacquet, C. & Tuomisto, H. 2004. Dissecting the spatial structures of ecological data at all scales. Ecology 85(7): 1826-1832.

I was interested in using PCNM vectors for soils-related studies; however, I encountered some difficulty interpreting what they really meant when applied to irregularly-spaced site locations. As a demonstration, I generated several (25 to be exact) PCNM vectors from a regular grid of points. Using an example from the PCNM manual page, I have plotted the values of the PCNM vectors at the grid nodes (below). The interpretation of the PCNM vectors derived from a 2D, regular grid is fairly simple: lower-order vectors represent regional-scale groupings, while higher-order vectors represent more local-scale groupings. One thing to keep in mind is that these vectors give us a multi-scale metric for grouping sites, and are not computed from any properties that may have been measured at the sites. The size of each symbol is proportional to the PCNM vector value, and the color represents the sign.

PCNM - Regular Grid

Soil survey operations are rarely conducted on a regular grid, so I re-computed PCNM vectors from the same simulated grid, but after randomly perturbing each site. The resulting map of PCNM vectors is presented below. The patterns are a little complex, but quickly decipherable with guidance from the PCNM vectors derived from a regular grid. Neat!

PCNM - Randomly Perturbed Regular Grid

R code used to make figures


# load required packages
library(PCNM)  # PCNM()
library(ade4)  # s.value()

# fake data
g <- expand.grid(x=1:10, y=1:10)
x.coords <- data.frame(x=g$x, y=g$y)

x.sub.dist <- dist(x.coords[,c('x','y')])
x.sub.pcnm <- PCNM(x.sub.dist, dbMEM=TRUE)

# plot first 25 PCNM vectors
pdf(file='PCNM-grid-example.pdf', width=10, height=10)

for(i in 1:25)
        s.value(x.coords[,c('x','y')], x.sub.pcnm$vectors[,i], clegend=0, sub=paste("PCNM", i), csub=1.5, addaxes=FALSE, origin=c(1,1))

dev.off()

# jitter the same input and try again
x.coords <- data.frame(x=jitter(g$x, factor=2), y=jitter(g$y, factor=2))
x.sub.dist <- dist(x.coords[,c('x','y')])
x.sub.pcnm <- PCNM(x.sub.dist, dbMEM=TRUE)

# plot first 25 PCNM vectors
pdf(file='PCNM-jittered_grid-example.pdf', width=10, height=10)

for(i in 1:25)
        s.value(x.coords[,c('x','y')], x.sub.pcnm$vectors[,i], clegend=0, sub=paste("PCNM", i), csub=1.5, addaxes=FALSE, origin=c(1,1))

dev.off()

Visualizing Random Fields and Select Components of Spatial Autocorrelation

I have always had a hard time thinking about the various parameters associated with random fields and empirical semi-variograms. The gstat package for R has an interesting interface for simulating random fields, based on a semi-variogram model. It is possible to quickly visualize the effect of altering semi-variogram parameters by "seeding" the random number generator with the same value at each iteration. Of primary interest was visualization of the principal axis of anisotropy, the semi-variogram sill, and the semi-variogram range. The code used to produce the images is included below. For more information on the R implementation of gstat, see the R-sig-Geo mailing list.


# load libraries
library(sp)
library(gstat)

# note: call set.seed() with the same value before each simulation below
# so that the panels are directly comparable

# setup a grid
xy <- expand.grid(1:100, 1:100)
names(xy) <- c("x","y")

Demonstration of Anisotropy Direction

Demonstrate Anisotropy Direction

var.model <- vgm(psill=1, model="Exp", range=15)
sim <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata = xy, nsim = 1)

var.model <- vgm(psill=1, model="Exp", range=15, anis=c(0, 0.5))
sim$sim2 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=1, model="Exp", range=15, anis=c(45, 0.5))
sim$sim3 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=1, model="Exp", range=15, anis=c(90, 0.5))
sim$sim4 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=1, model="Exp", range=15, anis=c(135, 0.5))
sim$sim5 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

# promote to SP class object
gridded(sim) = ~x+y

new.names <- c('iso', 'aniso 0 deg', 'aniso 45 deg', 'aniso 90 deg', 'aniso 135 deg')
p1 <- spplot(sim, names.attr=new.names, col.regions=topo.colors(100), as.table=TRUE, main="Demonstration of Anisotropy")

Demonstration of Range Parameter

Demonstrate Range Parameter

var.model <- vgm(psill=1, model="Exp", range=1)
sim <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata = xy, nsim = 1)

var.model <- vgm(psill=1, model="Exp", range=5)
sim$sim2 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=1, model="Exp", range=15)
sim$sim3 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=1, model="Exp", range=30)
sim$sim4 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

# promote to SP class object
gridded(sim) = ~x+y

new.names <- c('range = 1', 'range = 5', 'range = 15', 'range = 30')
p2 <- spplot(sim, names.attr=new.names, col.regions=topo.colors(100), as.table=TRUE, main="Demonstration of Range Parameter")

Demonstration of Sill Parameter

Demonstrate Sill Parameter

var.model <- vgm(psill=0.5, model="Exp", range=15)
sim <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata = xy, nsim = 1)

var.model <- vgm(psill=1, model="Exp", range=15)
sim$sim2 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=2, model="Exp", range=15)
sim$sim3 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

var.model <- vgm(psill=4, model="Exp", range=15)
sim$sim4 <- predict(gstat(formula=z~1, locations= ~x+y, dummy=TRUE, beta=0, model=var.model, nmax=20), newdata=xy, nsim=1)$sim1

# promote to SP class object
gridded(sim) = ~x+y

new.names <- c('sill = 0.5', 'sill = 1', 'sill = 2', 'sill = 4')
p3 <- spplot(sim, names.attr=new.names, col.regions=topo.colors(100), as.table=TRUE, main="Demonstration of Sill Parameter")
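Note that spplot() returns a lattice "trellis" object rather than drawing directly; at the interactive console auto-printing draws it, but in a script each stored object (p1, p2, p3) must be printed explicitly. A minimal sketch using plain lattice, the same mechanism spplot relies on:

```r
library(lattice)

# any trellis object behaves like p1/p2/p3 above
p <- xyplot(1:10 ~ 1:10)

inherits(p, "trellis")  # TRUE: a plot object, not a side effect
print(p)                # this call is what actually draws the plot
```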

Comparison of PSA Results: Pipette vs. Laser Granulometer

Soil texture data were collected via pipette and via laser granulometer for each horizon of three pedons. This example illustrates a simple approach to comparing the two methods, both with a standard XY-style scatter plot and on a soil textural triangle. This example uses code in the plotrix package for R, but you could also use this python approach.

The data referenced in these examples are attached at the bottom of this page. The code boxes below represent what a user would type into the R console. Lines prefixed with a '#' are interpreted by R as a comment, and thus ignored. Further visualization examples, using a larger dataset, can be accessed by clicking on the link at the bottom of this page. The goals of this example are:

  • import data into R
  • plot data
  • perform a simple linear regression
  • plot sand, silt, clay data on a textural triangle

Example commands can be directly pasted into the R console, or typed by hand. I would recommend copying a single line of example code at a time into the R console, then pressing the ENTER key; in this way the results of each command will be visible. Remember that the str() function will give you information about an object. Note that in order to load the sample data, you will need to set your working directory in R to the same folder in which you downloaded the sample data. For example: if you downloaded the sample data to your Desktop, you would set your working directory with:

  • on a mac: setwd('~/Desktop')
  • on windows: setwd('C:/path_to_your_desktop') where 'path_to_your_desktop' is the path to the desktop folder (note that R requires forward slashes, or doubled backslashes, in Windows paths)

Optionally, you can use the file.choose() command to bring up a standard file selection box. The result of this function can then be pasted into the read.table('....') function, replacing the '...' with the data returned from file.choose().
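As a minimal sketch (the column names below are stand-ins, not the real file's), str() reports the structure of whatever read.table() returns, and file.choose() can substitute for a hard-coded path:

```r
# small stand-in data frame; the real data come from read.table()
d <- data.frame(sand = c(40, 55), silt = c(35, 25), clay = c(25, 20))

# str() summarizes the structure of any object
str(d)

# interactive alternative to typing a path (opens a file dialog):
# p <- read.table(file.choose(), header = TRUE, sep = "\t")
```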

Sample Plot: Pipette vs. Granulometer

Sample Plot: Soil Textural Triangle: pipette values are in red, granulometer values are in blue. Line segments connect corresponding observations.

Load Required Packages and Input Data

# the package 'plotrix' can be installed with:
# install.packages('plotrix', repos='http://cran.r-project.org', dependencies=TRUE)
# note 1: you can accomplish this through the R-GUI in windows / mac os
# note 2: on UNIX-like systems you will need to start R as superuser to install packages
# load required package
library(plotrix)

#read in text data: note that they are TAB-DELIMITED: sep='\t'
p <- read.table('psa-pipette.txt', header=T, sep="\t")

#note that the granulometer data is whitespace delimited: i.e. no 'sep=' argument
g <- read.table('gran-psa.txt', header=T)

Initial Comparison of Clay Values (see Figure 1)

#do some initial comparisons:
plot(p$clay ~ g$clay)

#re-plot with custom settings:
# annotated axis, 0-100% range, plot symbols scaled by 0.8
plot(p$clay ~ g$clay, ylab="Pct. Clay: Pipette Method", xlab="Pct. Clay: Granulometer Method", main="Simple Plot", xlim=c(0,100), ylim=c(0,100), cex=0.8)

#add silt fraction to the SAME plot:
points(p$silt ~ g$silt,  cex=0.8, pch=2)
# add sand fraction to the SAME plot:
points(p$sand ~ g$sand,  cex=0.8, pch=3)

#add a legend:
legend(x=2.7, y=94, legend=c('Clay','Silt','Sand'), pch=1:3, cex=0.8)

#simple linear modeling: add trend lines
abline(lm(p$clay ~ g$clay), col="gray")
abline(lm(p$silt ~ g$silt), col="gray")
abline(lm(p$sand ~ g$sand), col="gray")

Simple Linear Model

#create a formula object
f <- p$clay ~ g$clay
#create a model object
m <- lm( f )
#return the details on the new model:
summary( m )

#the following is the output:

Call:
lm(formula = p$clay ~ g$clay)

Residuals:
     Min       1Q   Median       3Q      Max
-13.6120  -6.1412   0.4438   4.8047  19.3997

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)    3.761      6.477   0.581   0.5707  
g$clay         3.052      1.093   2.793   0.0144 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 8.664 on 14 degrees of freedom
Multiple R-Squared: 0.3577,     Adjusted R-squared: 0.3119
F-statistic: 7.798 on 1 and 14 DF,  p-value: 0.01439
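Rather than reading numbers off the printed summary, pieces of a fitted model can be extracted with accessor functions. A sketch with simulated data (with the real data, substitute the model object 'm' from above):

```r
# simulated x/y with a known intercept of 3 and slope of 2
set.seed(1)
x <- 1:16
y <- 3 + 2 * x + rnorm(16, sd = 0.5)
m2 <- lm(y ~ x)

coef(m2)               # named vector: (Intercept), x
confint(m2)            # 95% confidence intervals for each coefficient
summary(m2)$r.squared  # R-squared as a plain number
```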

Sample soil texture data plotted on the texture triangle (see Figure 2)

#subset sand, silt, clay information for texture triangle plot:
p.ssc <- data.frame(sand=p$sand,silt=p$silt,clay=p$clay)
g.ssc <- data.frame(sand=g$sand,silt=g$silt,clay=g$clay)

#plot a texture triangle from the pipette data
p.tri <- soil.texture(p.ssc,show.lines=T, show.names=T, tick.labels=seq(10,90,10), col.symbol='red', pch=16, cex=0.8, main="Soil Texture Triangle")
#add points from the granulometer data
g.tri  <- triax.points(g.ssc, col.symbol='blue', pch=16, cex=0.8)

#plot segments connecting (also see 'arrows' function)
segments(p.tri$x, p.tri$y, g.tri$x, g.tri$y, col='black', lty=1)

Extended Visualization Ideas and an Expanded Dataset

See attached file 'pipette_vs_granulometer.csv_.txt' at the bottom of this page.

Load Required Packages and Input Data

#the package 'plotrix' can be installed with:
# install.packages('plotrix', repos='http://cran.r-project.org', dependencies=TRUE)
# note 1: you can accomplish this through the R-GUI in windows / mac os
# note 2: on UNIX-like systems you will need to start R as superuser to
# install packages
#load required package
library(plotrix)

#read in text data: this is a CSV file: sep=","
x <- read.table('pipette_vs_granulometer.csv_.txt', header=T, sep=",")

#subset the data to include only sand, silt, clay columns
p <- data.frame(sand=x$p_sand, silt=x$p_silt, clay=x$p_clay)
g <- data.frame(sand=x$g_sand, silt=x$g_silt, clay=x$g_clay)

Sample 2D Plot: comparison between pipette and laser granulometer sand, silt, and clay values

Simple 2D plot of correlation between pipette and granulometer: for sand, silt, clay

#plot with custom settings:
# annotated axis, 0-100% range, plot symbols scaled by 0.8
plot(p$clay ~ g$clay, ylab="Pct. Clay: Pipette Method", xlab="Pct. Clay: Granulometer Method", main="Simple Plot", xlim=c(0,100), ylim=c(0,100), cex=0.6, pch=16, col=1)

#add a 1:1 correspondence line:
abline(0,1, col="gray", lty=2)

#add silt fraction to the SAME plot:
points(p$silt ~ g$silt,  cex=0.6, pch=16, col=2)
# add sand fraction to the SAME plot:
points(p$sand ~ g$sand,  cex=0.6, pch=16, col=3)

#add a legend:
legend(x=2.7, y=94, legend=c('Clay','Silt','Sand'), col=1:3, pch=16, cex=0.6)

#add locally smoothing estimator lines: lowess(x,y)
lines(lowess(g$clay, p$clay), col=1, lwd=2)
lines(lowess(g$silt, p$silt), col=2, lwd=2)
lines(lowess(g$sand, p$sand), col=3, lwd=2)

Soil Texture Triangle 2: visualization of differences between the two methods

Compare the two datasets on the textural triangle

#plot soil textures on triangle
p.tri <- soil.texture(p, show.lines=T, show.names=T, col.symbol='red', pch=16, cex=0.7, main="Soil Texture Triangle")
g.tri <- triax.points(g, col.symbol='blue', pch=16, cex=0.7)

#create dataframe of segment midpoints
mid <- data.frame(x=(p.tri$x + g.tri$x) / 2, y=(p.tri$y + g.tri$y) / 2 )

#plot a lowess function along the midpoints, ordered by x-coordinate
#what does this really mean?
lines( lowess(mid[order(mid$x), ]) , lwd=2, lty=2, col='green')

# plot overall shift: average p.x,p.y ---> avg g.x,g.y
arrows(mean(p.tri$x), mean(p.tri$y), mean(g.tri$x), mean(g.tri$y), len=.1, lwd=2)

#add a legend:
legend(.68,.79, legend=c("pipette","granulometer"), pch=16, cex=0.7, col=c('red','blue'))

# optionally: add clutter
#plot segments connecting related measurements
segments(p.tri$x, p.tri$y, g.tri$x, g.tri$y, col='gray', lty=1)