Estimating Missing Data with aregImpute() {R}

Submitted by dylan on Mon, 2010-04-19 18:02.

Missing Data
Soil scientists routinely sample, characterize, and summarize patterns in soil properties in space, with depth, and through time. Invariably, some samples are lost, or the funds required for complete characterization run out. In these cases the scientist is left with a data table containing holes: rows and columns with missing data. If the data are used in a regression, a missing value in any predictor or in the response results in row-wise deletion of that case, even if 9 out of 10 variables are present (a quick demonstration follows below). Furthermore, common multivariate methods (PCA, RDA, dissimilarity metrics, etc.) cannot effectively deal with missing data. The scientist is left with a few options: 1) row-wise deletion of cases missing any variable, 2) re-sampling or re-characterizing the missing samples, or 3) estimating the missing values from the other variables in the dataset. This last option is called missing data imputation, a broad topic with countless books and scientific papers written about it. Here is a fairly simple introduction to the topic of imputation. Fortunately for us non-experts, there is an excellent function (aregImpute()) in the Hmisc package for R.
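To see the row-wise deletion problem concretely, here is a tiny self-contained example (toy data, not from a real soil database): lm() silently drops every case with a missing value in any variable.

# toy data: rows 2 and 3 each contain one NA
d <- data.frame(y=c(5, 3, 8, 2, 7, 4),
                a=c(1.2, NA, 3.1, 0.5, 2.8, 1.0),
                b=c(10, 12, NA, 9, 15, 11))
f <- lm(y ~ a + b, data=d)
length(resid(f))   # 4 -- only the complete cases were used
nrow(d)            # 6 rows in the original data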

Below is an example of filling missing data in a soil characterization database with the aregImpute() function. For each missing value, 10 candidate multiple imputations are requested. Otherwise, the function runs with its default parameters; there are a lot of options, so reading the manual page is highly recommended! From the example below, it looks like we are able to adequately predict missing observations in most variables: all of the R² values are above 0.5, and most are above 0.8. Note that the aregImpute() function automatically finds the "best model" for predicting the missing values of each variable.

Implementation

## impute missing data with aregImpute()
## (an updated version of the methods used in transcan)
library(Hmisc)
# multiple imputation, requesting 10 candidate values per NA
x.ar <- aregImpute(~ L + A + B + clay + silt + sand + ph + fe_d + fe_o +
    mn_d + mn_o + Fe + Ca + K + Al + Si + Ti + Zr + Rb + S + Zn,
    data=x, n.impute=10)


#
# printing the fitted object (x.ar) reports an R-squared for predicting the
# non-missing values of each variable, using the last round of imputations:
#
# R-squares for Predicting Non-Missing Values for Each Variable Using Last Imputations of Predictors
# not bad!
#
    L     A     B  clay  silt  sand    ph  fe_d  fe_o  mn_d  mn_o    Fe    Ca
0.949 0.933 0.934 1.000 1.000 1.000 0.567 0.950 0.597 0.906 0.902 0.913 0.844
    K    Al    Si    Ti    Zr    Rb     S    Zn
0.860 0.839 0.829 0.885 0.886 0.885 0.680 0.730
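The candidate imputations themselves are stored in x.ar$imputed: a list with one matrix per variable, where rows correspond to the originally missing observations and columns to the 10 candidate values. For example (assuming ph had missing values in x):

# rows = originally missing observations, columns = the 10 candidates
head(x.ar$imputed$ph)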

I am interested in replacing missing data with the mean of the multiple imputations for each case. The code below demonstrates one possible approach. However, this is not the suggested approach for incorporating imputed values into subsequent analysis! Regression models should instead be fit once per completed dataset (one value from each multiple imputation), and the model coefficients combined according to the rules for mixture distributions. (Thanks for the tip, Cyrus.) There are functions within the Hmisc, rms, and Zelig packages for automating these procedures; a sketch using Hmisc follows.
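As a minimal sketch of that suggested workflow (the response and predictors here are arbitrary placeholders, not a recommended model), Hmisc provides fit.mult.impute(), which re-fits a model on each of the 10 completed datasets and pools the coefficients and their variances across imputations:

# fit an lm() to each completed dataset, then combine the coefficient
# estimates and variances across the 10 imputations
f.mi <- fit.mult.impute(ph ~ Fe + Ca + K, fitter=lm, xtrans=x.ar, data=x)
summary(f.mi)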

Implementation (slightly improper use of multiple imputation)

# get a list of those variables with imputed values
imp.vars <- names(x.ar$imputed)

# compute mean imputed value for each variable
# and extract the original indices of the missing data, by variable
imp.vars.mean <- lapply(x.ar$imputed, function(i) apply(i, 1, mean))
imp.vars.idx <- lapply(imp.vars.mean, function(i) as.integer(names(i)))

# copy the original data
x.no.na <- x

# loop over imputed variables
for(i in imp.vars) {
    print(i)
    # get the mean imputations for this variable
    imp.i <- imp.vars.mean[[i]]
    # get the original indices of the NA, for this variable
    idx.i <- imp.vars.idx[[i]]
    # replace the original NA with the mean imputed values
    x.no.na[idx.i, i] <- imp.i
}
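As a quick sanity check (my addition), confirm that the filled copy no longer contains NA in any of the imputed variables:

# count remaining NA per imputed variable -- all counts should be 0
sapply(x.no.na[, imp.vars], function(v) sum(is.na(v)))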


using the imputed data...

Came across this via R-bloggers, and I have to correct something that you're doing: you do not want to use the mean (or median or anything else like that) of the imputations to fill the missing values. What you need to do is run your analysis on each of the 10 imputed datasets. Then you combine the results according to the rules for a mixture distribution (specifically, you exploit the asymptotic normality of the parameter estimates, and a mixture of normals is t with certain degrees of freedom). Schafer's FAQ, to which you link, has those rules. You can also implement them with the Zelig package in R (or Clarify in Stata). For output that is not supported by Zelig, I have some combine functions that you can use here:

http://www.columbia.edu/~cds81/

(Look at the bottom right of the page under "Code for some utilities.")

The intuition for why you don't want to use the mean of the imputations is that it is not generally true that f(E(X)) = E(f(X)), and it's the latter expectation that you want.
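A toy illustration of this point in R (my example, not from the original comment):

# the function of the mean is not the mean of the function
z <- c(0, 2)
mean(z)^2    # f(E(Z)) = 1
mean(z^2)    # E(f(Z)) = 2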

Correction

Hi Cyrus, thanks for the suggestions. I'll make some corrections to the post and mention the caveats you suggest. In this case, I was looking for an approach to fill missing data based on the "best model" of the other, non-missing observations. The aregImpute() function does most of the hard work involved in finding that "best model".