
Matt Gregory

Data Scientist


Keras is a high-level neural networks API developed with a focus on enabling fast experimentation (here we use it with a TensorFlow backend). Being able to go from idea to result with the least possible delay is key to doing good research. This blog post celebrates the release of the new Deep Learning with R book by François Chollet (the creator of Keras), which provides a more comprehensive introduction to both Keras and the concepts and practice of deep learning. The first few chapters are free and are definitely worth checking out.

For this blog post we assume the reader is familiar with neural networks and how they work.

Install the package and load it, then install all the required dependencies. We suggest you visit the links above and work through the MNIST example, which we reproduce here (we looked at this dataset before in our t-SNE post). I’m attempting to do this from memory as a way to practise what I learnt from the book (you could try using just the cheatsheet). Teaching others (through blogs or presentations) is a great way to consolidate your learning of new techniques, and that kind of auto-didactic, life-long learning is critical to your success as a data scientist.

# run the commented code if this is the first time you've used Keras
 
# devtools::install_github("rstudio/keras")
library(keras)
# install_keras()

One of my favourite bits to paraphrase from the Deep Learning with R book is how a neural network is like a multi-stage information-distillation pipeline. With every layer we purify the message from the input, getting more and more dissimilar to the original input but more and more informative for completing a task, such as predicting what animal is represented by a bunch of pixels (cat or not). Thus to proceed, we need labelled input data and a loss function (a way to score how well our network is doing at predicting, by comparing predictions against “truth” labels).

Keras is like LEGO: it’s at a high enough level that we can stick our layers together and experiment readily. It also facilitates scalability; as neural networks can take considerable computing effort (given the size of the data), we need to be able to move our computation readily to a GPU or the cloud (AWS can be handy, and to reduce some of the GUI faff and complicated setup, check out this databox repo for creating some handy default instances from your terminal).

mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y

In the above code chunk we loaded the data with its respective labels (i.e. what digit is represented by the handwritten pixel representation).

str(mnist)
## List of 2
##  $ train:List of 2
##   ..$ x: int [1:60000, 1:28, 1:28] 0 0 0 0 0 0 0 0 0 0 ...
##   ..$ y: int [1:60000(1d)] 5 0 4 1 9 2 1 3 1 4 ...
##  $ test :List of 2
##   ..$ x: int [1:10000, 1:28, 1:28] 0 0 0 0 0 0 0 0 0 0 ...
##   ..$ y: int [1:10000(1d)] 7 2 1 0 4 1 4 9 5 9 ...

Tensors and our digit data

As Keras is a TensorFlow frontend, unsurprisingly the data is stored as tensors (multidimensional arrays). You are probably familiar with a one-dimensional vector and a two-dimensional matrix; tensors extend this to N dimensions (in tensor parlance people normally say axes rather than dimensions). For example, if you pack matrices (2D tensors) into an array, you create a 3D tensor, or a cube of numbers (interestingly and inconsistently, we just said that axes is preferred, yet note how “D” is suffixed here!).
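
As a toy illustration (not from the book), a 3D tensor in R is just an array with three axes; in the MNIST data below the sample axis comes first:

# a toy 3D tensor: 2 samples, each a 3 x 4 matrix
x <- array(0, dim = c(2, 3, 4))
dim(x)          # 2 3 4
length(dim(x))  # 3 axes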

dim(train_images)
## [1] 60000    28    28
# we can subset as normal
# train_images[1, , ]

Our training data is a 3D tensor of integers - I thought the digit pixel data would be 2D? Well it is, more precisely, it’s an array of 60,000 matrices of 28 × 28 integers. It’s common when working with tensors that the first axis is the sample number, here samples are images of digits. This can be useful in breaking the data up into “batches”, as it would be hard to process it all at once, and we can take advantage of parallelising the work to speed up computation time. Each sample has an associated matrix that is a gray-scale image, with integers between 0 and 255.
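
For example, a batch is just a slice along that first sample axis (a small illustration rather than a step we need here):

# the first "batch" of 128 digit images, taken along the sample axis
batch_1 <- train_images[1:128, , ]
dim(batch_1)  # 128 28 28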

digit <- train_images[1, , ]
plot(as.raster(digit, max = 255))

plot of chunk 2018-03-04-minst5

This looks like a five. We can check this mental model against reality by comparing it to the “truth” label. We are correct; I wonder what our accuracy across all 70,000 images would be?

train_labels[1]
## [1] 5

Before training we reshape each 28 × 28 image into a flat vector of 784 values and rescale the gray-scale integers from the [0, 255] range into the [0, 1] interval, so all inputs are on a comparable scale. We also need to convert the digit labels to a binary class matrix (one-hot encoding), a necessary step for the categorical loss we use later.

train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
 
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255
 
# one-hot encode the labels into a binary class matrix (make categorical)
train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)

Using Keras we can add layers to our network. From our basic knowledge of neural networks we know we need an input layer (the pixels), a hidden layer to distill relevant information from the pixels of the input layer (learning is possible by updating the weights with back-propagation, guided by our loss function and optimiser), which feeds into a final output layer that should have as many nodes as there are classes (ten digits: 0, 1, 2 … 9). See the comments in the code chunk for extra detail.

# Keras Model composed of a linear stack of layers
network <- keras_model_sequential() %>%
  # 28 times 28 equals 784 pixels
  # the units argument gives the dimensionality of the output space
  # a sort of feature space, thus we go from 784 pixels to 512 features
  # we are distilling information in our representational layer
  # Why 512 nodes? Dunno but it's between the input and output layer
  layer_dense(units = 512, activation = "relu", input_shape = c(28*28)) %>%
  # ten digits to output, we use softmax to give ten probability scores
  # summing to one, we can predict the digit with the highest probability
  layer_dense(units = 10, activation = "softmax")

The dense layers are fully connected: every node in one layer is linked to every node in the next (for available activation functions see here). We need to train a lot of weights (parameters)!

str(network)
## Model
## ___________________________________________________________________________
## Layer (type)                     Output Shape                  Param #     
## ===========================================================================
## dense_1 (Dense)                  (None, 512)                   401920      
## ___________________________________________________________________________
## dense_2 (Dense)                  (None, 10)                    5130        
## ===========================================================================
## Total params: 407,050
## Trainable params: 407,050
## Non-trainable params: 0
## ___________________________________________________________________________
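
As a sanity check on those parameter counts, a dense layer has (inputs × units) weights plus one bias per unit:

784 * 512 + 512  # 401920 parameters in the first dense layer
512 * 10 + 10    # 5130 parameters in the output layer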

Prior to fitting the model to the training data we must configure it by specifying what optimiser and loss function to use.

network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

We are now ready to fit the model to the training data for a fixed number of iterations (epochs) over the dataset; note that fit() modifies the network in place. The batch_size specifies the number of samples per gradient update. You can use the view_metrics argument to watch the training in real time.

set.seed(1337)
 
network %>%
  fit(train_images, train_labels, epochs = 5, batch_size = 128)

Wow! You can see how quickly the accuracy improves and how fast the process is. On my machine it took about 10 seconds to run, with an accuracy of just under 99% on the training data. However, our model has likely overfit to some degree, so let’s assess how it performs on data it hasn’t seen before.

metrics <- network %>% evaluate(test_images, test_labels)
metrics
## $loss
## [1] 0.07282939
## 
## $acc
## [1] 0.9791

Almost 98%, not bad for the “Hello World!” of machine learning vision tasks. It was all put together with a few lines of code.

As usual in R we can make use of predict functionality that could ultimately lead to our model being deployed (albeit the Keras version of it). We can also save it as a JSON or other machine readable format for later use.

network %>% predict_classes(test_images[1:10,])
##  [1] 7 2 1 0 4 1 4 9 5 9
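
A quick sketch of that saving step (the file name is our own choice): we can serialise the architecture to JSON with model_to_json(), or save the whole model, weights included, with save_model_hdf5().

# architecture only, as a JSON string
network_json <- model_to_json(network)
# architecture plus weights and optimiser state
save_model_hdf5(network, "mnist_network.h5")
# reload later with
# network <- load_model_hdf5("mnist_network.h5")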

Hospital readmissions prediction

Thus far we have simply copied and pasted an example. Let’s use a dataset we are interested in and attempt to use Keras on a classification problem based on hospital data from the UCI machine learning repo (due to its size we have not put this in the data folder of the Machine Gurning repo). We use data that is not pre-processed so we can reflect on this important step.

The dataset represents ten years (1999-2008) of clinical care at 130 U.S. hospitals. Hospital readmission for diabetic patients is a major concern in the United States. Predicting those patients that are readmitted within 30 days would be useful for efficiency savings, helping doctors spot patients at risk of being readmitted.

Searching for this dataset on Rpubs reveals someone who has already tidied it somewhat, so we built on and adapted their work to our purpose. We also inspected the file by opening it and checking for any dodgy codings of “NA”, we found “?” was used and some variables had mostly missing data.

# author note, use getwd() when writing to check relative address
# different relative address in interactive use and when knitting
diabetes <- readr::read_csv("./data/dataset_diabetes/diabetic_data.csv",
                            na = c("", "NA", "?", "Unknown/Invalid"))
 
# review missingness patterns
extracat::visna(diabetes, type = "b")

plot of chunk 2018-03-04-visna

Race, weight and a few other variables are responsible for most of the missing data, as shown above (see here for help interpreting this figure). We drop them from our dataset.

For simplicity, we drop encounter_id and patient_nbr and assume that every row represents a unique admission to the hospital (not true of course, we could use a recurrent neural network to learn from historical experience of the patient but we simplify things for now).

library(dplyr)
diab_df <- diabetes %>%
  select(-race, -weight, -payer_code,
         -medical_specialty, -diag_1:-diag_3) %>%
  select(-encounter_id:-patient_nbr) %>%
  janitor::clean_names()
 
glimpse(diab_df)
## Observations: 101,766
## Variables: 41
## $ gender                   <chr> "Female", "Female", "Female", "Male",...
## $ age                      <chr> "[0-10)", "[10-20)", "[20-30)", "[30-...
## $ admission_type_id        <int> 6, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1, 2, 1...
## $ discharge_disposition_id <int> 25, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, ...
## $ admission_source_id      <int> 1, 7, 7, 7, 7, 2, 2, 7, 4, 4, 7, 4, 7...
## $ time_in_hospital         <int> 1, 3, 2, 2, 1, 3, 4, 5, 13, 12, 9, 7,...
## $ num_lab_procedures       <int> 41, 59, 11, 44, 51, 31, 70, 73, 68, 3...
## $ num_procedures           <int> 0, 0, 5, 1, 0, 6, 1, 0, 2, 3, 2, 0, 0...
## $ num_medications          <int> 1, 18, 13, 16, 8, 16, 21, 12, 28, 18,...
## $ number_outpatient        <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_emergency         <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1...
## $ number_inpatient         <int> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_diagnoses         <int> 1, 9, 6, 7, 5, 9, 7, 8, 8, 8, 9, 7, 8...
## $ max_glu_serum            <chr> "None", "None", "None", "None", "None...
## $ a1cresult                <chr> "None", "None", "None", "None", "None...
## $ metformin                <chr> "No", "No", "No", "No", "No", "No", "...
## $ repaglinide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ nateglinide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ chlorpropamide           <chr> "No", "No", "No", "No", "No", "No", "...
## $ glimepiride              <chr> "No", "No", "No", "No", "No", "No", "...
## $ acetohexamide            <chr> "No", "No", "No", "No", "No", "No", "...
## $ glipizide                <chr> "No", "No", "Steady", "No", "Steady",...
## $ glyburide                <chr> "No", "No", "No", "No", "No", "No", "...
## $ tolbutamide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ pioglitazone             <chr> "No", "No", "No", "No", "No", "No", "...
## $ rosiglitazone            <chr> "No", "No", "No", "No", "No", "No", "...
## $ acarbose                 <chr> "No", "No", "No", "No", "No", "No", "...
## $ miglitol                 <chr> "No", "No", "No", "No", "No", "No", "...
## $ troglitazone             <chr> "No", "No", "No", "No", "No", "No", "...
## $ tolazamide               <chr> "No", "No", "No", "No", "No", "No", "...
## $ examide                  <chr> "No", "No", "No", "No", "No", "No", "...
## $ citoglipton              <chr> "No", "No", "No", "No", "No", "No", "...
## $ insulin                  <chr> "No", "Up", "No", "Up", "Steady", "St...
## $ glyburide_metformin      <chr> "No", "No", "No", "No", "No", "No", "...
## $ glipizide_metformin      <chr> "No", "No", "No", "No", "No", "No", "...
## $ glimepiride_pioglitazone <chr> "No", "No", "No", "No", "No", "No", "...
## $ metformin_rosiglitazone  <chr> "No", "No", "No", "No", "No", "No", "...
## $ metformin_pioglitazone   <chr> "No", "No", "No", "No", "No", "No", "...
## $ change                   <chr> "No", "Ch", "No", "Ch", "Ch", "No", "...
## $ diabetesmed              <chr> "No", "Yes", "Yes", "Yes", "Yes", "Ye...
## $ readmitted               <chr> "NO", ">30", "NO", "NO", "NO", ">30",...

Look at the variables. Age is not useful in its current format; let’s convert it into something informative. Some of the variables / features are of the wrong type. We do some basic tidying below, with the comments explaining what’s happening. We could also benefit from converting to dummy variables for modelling purposes; the caret package can help with this.

# fix age: strip the brackets from ranges like "[0-10)"
diab_df$age <- gsub("\\[|\\(|\\)", "", diab_df$age)
 
# split string to remove hyphen and convert into numeric
# create new variables then drop uninformative age variables
diab_df$age_low <- as.numeric(stringr::str_split(diab_df$age, "-", simplify = TRUE)[, 1])
diab_df$age_high <- as.numeric(stringr::str_split(diab_df$age, "-", simplify = TRUE)[, 2])
 
diab_df$age_mid <- round((diab_df$age_low + 
                          diab_df$age_high) / 2, digits = 0)
 
# drop redundant, keep age mid
diab_df_aged <- dplyr::select(diab_df,
                       -age, -age_low, -age_high)
 
# create target / response variable, make binary
diab_df_aged$target <- dplyr::if_else(diab_df_aged$readmitted == "<30", 1, 0)
# drop redundant readmitted, could use later for multiclass prediction
readmitted <- diab_df_aged$readmitted  # store for later
diab_df_tgt <- dplyr::select(diab_df_aged, -readmitted)
 
# glimpse(diab_df_tgt)
 
# We can examine these character variables more closely using
# caveat: table is slow
# for (i in 16:41) {
#   print("-------------------------")
# 
#   print(names(diab_df_tgt[, i]))
#   print(table(diab_df_tgt[, i]))
#   
#   print("-------------------------")
#   }
 
# Quite a few are uninformative (i.e. they have zero variance):
# there are too few instances to learn from,
# and some are "No" for all samples, again uninformative
# (you might as well include a feature giving each patient's species)
# Googling revealed caret has ?nearZeroVar() but we have some character columns,
# thus we copy a function from Stack Overflow
# https://stackoverflow.com/questions/8805298/quickly-remove-zero-variance-variables-from-a-data-frame
 
# returns the (named) indices of columns with only one unique value
rm_zero_var_features <- function(dat) {
    out <- lapply(dat, function(x) length(unique(x)))
    want <- which(!out > 1)  # parses as !(out > 1), i.e. one or fewer unique values
    unlist(want)
}
 
# name variables which are zero variance
diab_df_useless2drop <- rm_zero_var_features(diab_df_tgt)
# drop them
diab_df_useless_dropped <- diab_df_tgt[ , c(-diab_df_useless2drop[1],
                                            -diab_df_useless2drop[2])]
 
glimpse(diab_df_useless_dropped)
## Observations: 101,766
## Variables: 39
## $ gender                   <chr> "Female", "Female", "Female", "Male",...
## $ admission_type_id        <int> 6, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1, 2, 1...
## $ discharge_disposition_id <int> 25, 1, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, ...
## $ admission_source_id      <int> 1, 7, 7, 7, 7, 2, 2, 7, 4, 4, 7, 4, 7...
## $ time_in_hospital         <int> 1, 3, 2, 2, 1, 3, 4, 5, 13, 12, 9, 7,...
## $ num_lab_procedures       <int> 41, 59, 11, 44, 51, 31, 70, 73, 68, 3...
## $ num_procedures           <int> 0, 0, 5, 1, 0, 6, 1, 0, 2, 3, 2, 0, 0...
## $ num_medications          <int> 1, 18, 13, 16, 8, 16, 21, 12, 28, 18,...
## $ number_outpatient        <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_emergency         <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1...
## $ number_inpatient         <int> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_diagnoses         <int> 1, 9, 6, 7, 5, 9, 7, 8, 8, 8, 9, 7, 8...
## $ max_glu_serum            <chr> "None", "None", "None", "None", "None...
## $ a1cresult                <chr> "None", "None", "None", "None", "None...
## $ metformin                <chr> "No", "No", "No", "No", "No", "No", "...
## $ repaglinide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ nateglinide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ chlorpropamide           <chr> "No", "No", "No", "No", "No", "No", "...
## $ glimepiride              <chr> "No", "No", "No", "No", "No", "No", "...
## $ acetohexamide            <chr> "No", "No", "No", "No", "No", "No", "...
## $ glipizide                <chr> "No", "No", "Steady", "No", "Steady",...
## $ glyburide                <chr> "No", "No", "No", "No", "No", "No", "...
## $ tolbutamide              <chr> "No", "No", "No", "No", "No", "No", "...
## $ pioglitazone             <chr> "No", "No", "No", "No", "No", "No", "...
## $ rosiglitazone            <chr> "No", "No", "No", "No", "No", "No", "...
## $ acarbose                 <chr> "No", "No", "No", "No", "No", "No", "...
## $ miglitol                 <chr> "No", "No", "No", "No", "No", "No", "...
## $ troglitazone             <chr> "No", "No", "No", "No", "No", "No", "...
## $ tolazamide               <chr> "No", "No", "No", "No", "No", "No", "...
## $ insulin                  <chr> "No", "Up", "No", "Up", "Steady", "St...
## $ glyburide_metformin      <chr> "No", "No", "No", "No", "No", "No", "...
## $ glipizide_metformin      <chr> "No", "No", "No", "No", "No", "No", "...
## $ glimepiride_pioglitazone <chr> "No", "No", "No", "No", "No", "No", "...
## $ metformin_rosiglitazone  <chr> "No", "No", "No", "No", "No", "No", "...
## $ metformin_pioglitazone   <chr> "No", "No", "No", "No", "No", "No", "...
## $ change                   <chr> "No", "Ch", "No", "Ch", "Ch", "No", "...
## $ diabetesmed              <chr> "No", "Yes", "Yes", "Yes", "Yes", "Ye...
## $ age_mid                  <dbl> 5, 15, 25, 35, 45, 55, 65, 75, 85, 95...
## $ target                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1...
# now that we removed these variables which were zero var
# should be able to use dummyVars
library(caret)
# dummify data
dmy <- dummyVars( ~ ., data = diab_df_useless_dropped)
trsf <- data.frame(predict(dmy, newdata = diab_df_useless_dropped))
glimpse(trsf)
## Observations: 101,766
## Variables: 96
## $ genderFemale                   <dbl> 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1...
## $ genderMale                     <dbl> 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0...
## $ admission_type_id              <dbl> 6, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1...
## $ discharge_disposition_id       <dbl> 25, 1, 1, 1, 1, 1, 1, 1, 1, 3, ...
## $ admission_source_id            <dbl> 1, 7, 7, 7, 7, 2, 2, 7, 4, 4, 7...
## $ time_in_hospital               <dbl> 1, 3, 2, 2, 1, 3, 4, 5, 13, 12,...
## $ num_lab_procedures             <dbl> 41, 59, 11, 44, 51, 31, 70, 73,...
## $ num_procedures                 <dbl> 0, 0, 5, 1, 0, 6, 1, 0, 2, 3, 2...
## $ num_medications                <dbl> 1, 18, 13, 16, 8, 16, 21, 12, 2...
## $ number_outpatient              <dbl> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_emergency               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_inpatient               <dbl> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_diagnoses               <dbl> 1, 9, 6, 7, 5, 9, 7, 8, 8, 8, 9...
## $ max_glu_serum.200              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ max_glu_serum.300              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ max_glu_serumNone              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ max_glu_serumNorm              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresult.7                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresult.8                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresultNone                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ a1cresultNorm                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metforminDown                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metforminNo                    <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ metforminSteady                <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ metforminUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideDown                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ repaglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideDown                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ nateglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideDown             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideNo               <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ chlorpropamideSteady           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideUp               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepirideDown                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepirideNo                  <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ glimepirideSteady              <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ glimepirideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acetohexamideNo                <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ acetohexamideSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizideDown                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizideNo                    <dbl> 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1...
## $ glipizideSteady                <dbl> 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0...
## $ glipizideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburideDown                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburideNo                    <dbl> 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1...
## $ glyburideSteady                <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0...
## $ glyburideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolbutamideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ tolbutamideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneDown               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneNo                 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ pioglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneUp                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ rosiglitazoneDown              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ rosiglitazoneNo                <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1...
## $ rosiglitazoneSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0...
## $ rosiglitazoneUp                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseDown                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ acarboseSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolDown                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ miglitolSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ troglitazoneNo                 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ troglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideNo                   <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ tolazamideSteady               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideUp                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ insulinDown                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ insulinNo                      <dbl> 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0...
## $ insulinSteady                  <dbl> 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1...
## $ insulinUp                      <dbl> 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminDown        <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminNo          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ glyburide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminUp          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizide_metforminNo          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ glipizide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepiride_pioglitazoneNo     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ glimepiride_pioglitazoneSteady <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_rosiglitazoneNo      <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ metformin_rosiglitazoneSteady  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_pioglitazoneNo       <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ metformin_pioglitazoneSteady   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ changeCh                       <dbl> 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0...
## $ changeNo                       <dbl> 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1...
## $ diabetesmedNo                  <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ diabetesmedYes                 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ age_mid                        <dbl> 5, 15, 25, 35, 45, 55, 65, 75, ...
## $ target                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
# note: an earlier attempt with the formula target ~ . dropped the target
# variable (dummyVars excludes the response); using ~ . keeps it

This is problematic as we have fallen foul of the dummy variable trap; we have perfect collinearity between some variables, for example genderFemale and genderMale: if we know one we know the other (see here for more details). The fullRank parameter protects us against this. Let’s turn on fullRank and try our data frame again.

# repeat but with fullrank True
dmy <- dummyVars( ~ ., data = diab_df_useless_dropped,
                 fullRank = TRUE)
trsf <- data.frame(predict(dmy, newdata = diab_df_useless_dropped))
glimpse(trsf)
## Observations: 101,766
## Variables: 70
## $ genderMale                     <dbl> 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0...
## $ admission_type_id              <dbl> 6, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1...
## $ discharge_disposition_id       <dbl> 25, 1, 1, 1, 1, 1, 1, 1, 1, 3, ...
## $ admission_source_id            <dbl> 1, 7, 7, 7, 7, 2, 2, 7, 4, 4, 7...
## $ time_in_hospital               <dbl> 1, 3, 2, 2, 1, 3, 4, 5, 13, 12,...
## $ num_lab_procedures             <dbl> 41, 59, 11, 44, 51, 31, 70, 73,...
## $ num_procedures                 <dbl> 0, 0, 5, 1, 0, 6, 1, 0, 2, 3, 2...
## $ num_medications                <dbl> 1, 18, 13, 16, 8, 16, 21, 12, 2...
## $ number_outpatient              <dbl> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_emergency               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_inpatient               <dbl> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0...
## $ number_diagnoses               <dbl> 1, 9, 6, 7, 5, 9, 7, 8, 8, 8, 9...
## $ max_glu_serum.300              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ max_glu_serumNone              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ max_glu_serumNorm              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresult.8                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresultNone                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ a1cresultNorm                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metforminNo                    <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ metforminSteady                <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ metforminUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ repaglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ nateglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideNo               <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ chlorpropamideSteady           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideUp               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepirideNo                  <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ glimepirideSteady              <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ glimepirideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acetohexamideSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizideNo                    <dbl> 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1...
## $ glipizideSteady                <dbl> 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0...
## $ glipizideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburideNo                    <dbl> 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1...
## $ glyburideSteady                <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0...
## $ glyburideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolbutamideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneNo                 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ pioglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneUp                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ rosiglitazoneNo                <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1...
## $ rosiglitazoneSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0...
## $ rosiglitazoneUp                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ acarboseSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ miglitolSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ troglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideSteady               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideUp                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ insulinNo                      <dbl> 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0...
## $ insulinSteady                  <dbl> 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1...
## $ insulinUp                      <dbl> 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminNo          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ glyburide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminUp          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepiride_pioglitazoneSteady <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_rosiglitazoneSteady  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_pioglitazoneSteady   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ changeNo                       <dbl> 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1...
## $ diabetesmedYes                 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ age_mid                        <dbl> 5, 15, 25, 35, 45, 55, 65, 75, ...
## $ target                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...

This reduces our variables or features from 95 to 69 and, importantly, avoids perfect collinearity in the features. In essence perfect collinearity is bad because you are increasing the number of parameters in the model without accruing any extra information. It also invalidates some of the assumptions of linear regression and least squares estimation.

Almost there! We just need to scale our variables between zero and one so that no single variable dominates the model simply because it is measured on a bigger scale.

# remove NA, as only 3
# sum(!complete.cases(trsf))
 
# create function to scale
range01 <- function(x, ...) {
  (x - min(x, ...)) / (max(x, ...) - min(x, ...))
}
 
diab_scaled <-  trsf %>%
  na.omit() %>% 
  purrr::map_df(., range01)
 
glimpse(diab_scaled)
## Observations: 101,763
## Variables: 70
## $ genderMale                     <dbl> 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0...
## $ admission_type_id              <dbl> 0.7142857, 0.0000000, 0.0000000...
## $ discharge_disposition_id       <dbl> 0.88888889, 0.00000000, 0.00000...
## $ admission_source_id            <dbl> 0.00000000, 0.25000000, 0.25000...
## $ time_in_hospital               <dbl> 0.00000000, 0.15384615, 0.07692...
## $ num_lab_procedures             <dbl> 0.30534351, 0.44274809, 0.07633...
## $ num_procedures                 <dbl> 0.0000000, 0.0000000, 0.8333333...
## $ num_medications                <dbl> 0.0000, 0.2125, 0.1500, 0.1875,...
## $ number_outpatient              <dbl> 0.00000000, 0.00000000, 0.04761...
## $ number_emergency               <dbl> 0.00000000, 0.00000000, 0.00000...
## $ number_inpatient               <dbl> 0.00000000, 0.00000000, 0.04761...
## $ number_diagnoses               <dbl> 0.0000000, 0.5333333, 0.3333333...
## $ max_glu_serum.300              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ max_glu_serumNone              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ max_glu_serumNorm              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresult.8                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ a1cresultNone                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ a1cresultNorm                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metforminNo                    <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ metforminSteady                <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ metforminUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ repaglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ repaglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideNo                  <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ nateglinideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ nateglinideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideNo               <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ chlorpropamideSteady           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ chlorpropamideUp               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepirideNo                  <dbl> 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1...
## $ glimepirideSteady              <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...
## $ glimepirideUp                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acetohexamideSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizideNo                    <dbl> 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1...
## $ glipizideSteady                <dbl> 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0...
## $ glipizideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburideNo                    <dbl> 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1...
## $ glyburideSteady                <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0...
## $ glyburideUp                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolbutamideSteady              <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneNo                 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ pioglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ pioglitazoneUp                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ rosiglitazoneNo                <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1...
## $ rosiglitazoneSteady            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0...
## $ rosiglitazoneUp                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ acarboseSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ acarboseUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolNo                     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ miglitolSteady                 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ miglitolUp                     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ troglitazoneSteady             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideSteady               <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ tolazamideUp                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ insulinNo                      <dbl> 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0...
## $ insulinSteady                  <dbl> 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1...
## $ insulinUp                      <dbl> 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminNo          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ glyburide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glyburide_metforminUp          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glipizide_metforminSteady      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ glimepiride_pioglitazoneSteady <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_rosiglitazoneSteady  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ metformin_pioglitazoneSteady   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ changeNo                       <dbl> 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1...
## $ diabetesmedYes                 <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ age_mid                        <dbl> 0.0000000, 0.1111111, 0.2222222...
## $ target                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...

That was a bit long winded, but we’ve scaled all our variables and removed some of the uninformative and unhelpful ones (some of the diag variables could be useful; it would be nice if we could link this dataset or fill in the NAs somehow, but we proceed with what we have).

The Keras Workflow

As in the MNIST example we can follow a workflow:

  • Define your training data: input tensors and target tensors.
  • Define a network of layers (or model) that maps your inputs to your targets.
  • Configure the learning process by choosing a loss function, an optimizer, and some metrics to monitor.
  • Iterate on your training data by calling the fit() method of your model.

Defining the training data

Prior to this we inspect the data and notice it is imbalanced (we revisit this problem later; for now we continue as is).

table(diab_scaled$target)
## 
##     0     1 
## 90406 11357

Now we need to get our training data and our labels into the correct format. Let’s remind ourselves of what the MNIST data structure was (we probably want tensors right!?).

str(train_images)
##  num [1:60000, 1:784] 0 0 0 0 0 0 0 0 0 0 ...
str(train_labels)
##  num [1:60000, 1:10] 0 1 0 0 0 0 0 0 0 0 ...

The train_images were scaled pixel scores, 784 in total for each sample (28 × 28 flattened). There were 60,000 samples or rows, and the train_labels correspondingly had 60,000 entries. The caret package and its createDataPartition() function are a better approach (they give a stratified split), but we use a simple random split here for simplicity; a sketch of the caret alternative follows the code below.

## 75% of the sample size
smp_size <- floor(0.75 * nrow(diab_scaled))
 
## set the seed to make your partition reproducible
set.seed(1337)
train_ind <- sample(seq_len(nrow(diab_scaled)), size = smp_size)
 
# create index
train <- diab_scaled[train_ind, ]
test <- diab_scaled[-train_ind, ]
 
 
train_data <- dplyr::select(train, -target)
train_labels <- train$target
test_data <- dplyr::select(test, -target)
test_labels <- test$target
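
For reference, a stratified split with caret might look like the sketch below; it is commented out as we stick with the simple random split above.

# a sketch only, not run here, so we keep the train/test objects created above
# idx <- caret::createDataPartition(diab_scaled$target, p = 0.75, list = FALSE)
# train <- diab_scaled[idx, ]
# test  <- diab_scaled[-idx, ]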

Currently our data is not in the form of tensors; we can’t feed a data frame straight into the network. We convert the features to a numeric matrix, and we transform the target from a vector of class values into a matrix with a boolean column per class, indicating whether or not a given instance has that class value (just as we did for MNIST with to_categorical()).

# https://www.datacamp.com/community/tutorials/keras-r-deep-learning#prep
x_train <- as.matrix(train_data)
dimnames(x_train) <- NULL
 
x_test <- as.matrix(test_data)
dimnames(x_test) <- NULL
 
y_train <- keras::to_categorical(as.numeric(train_labels))
y_test <- keras::to_categorical(as.numeric(test_labels)) 
 
head(y_train)
##      [,1] [,2]
## [1,]    1    0
## [2,]    1    0
## [3,]    1    0
## [4,]    1    0
## [5,]    1    0
## [6,]    1    0

Now we have reached the end of the exploration and preprocessing steps for the hospital data, although we caveat that we flew through this process: we did not apply any expert knowledge, nor did we inspect the data as closely as we should have; this blog is meant to be illustrative of using keras. We can now go on to building our neural network with keras!

Define the network architecture

Can we predict which patients will be readmitted to hospital within 30 days of discharge? The input data is vectors, and the labels are scalars (1s and 0s): this is about the easiest setup we’ll ever encounter (compared to image, video or time-series data). For this problem we probably want a fully-connected multi-layer perceptron (not just because it sounds cool!), as it should perform well on this kind of task.

model <- keras_model_sequential() %>%
  layer_dense(units = 10, activation = "relu", input_shape = c(69)) %>%
  layer_dense(units = 2, activation = "softmax")

Note how the output layer creates 2 output values, one for each readmission class (not readmitted, or readmitted within 30 days). The first layer, on the other hand, contains a somewhat arbitrary 10 hidden nodes (the dimensionality of its representation space) and has an input_shape of 69, because our x_train has 69 columns.

Having more hidden units (a higher-dimensional representation space) allows your network to learn more-complex representations, but it slows things down and may result in overfitting to training data. To save on computation time, we use a smaller number to start with than we might otherwise for the purposes of this blog.

The relu activation function allows our model to learn non-linear transformations of the input data; without it the dense layers would amount to a chain of linear operations, which would restrict our search through the hypothesis space for a decent representation of the data. Imagine a piece of blue paper crumpled together with a yellow piece: using only linear operations would be like trying to untangle the two pieces of paper with just your forehead.

The output layer uses a softmax activation so that the two outputs are squashed into probabilities that sum to one. The score for the “1” class, between 0 and 1, indicates how likely the sample is to have the target “1”: how likely the patient is to be readmitted within 30 days. For a binary problem an equivalent formulation uses a single output node with a sigmoid activation, since one probability is enough; a sketch of that alternative follows.
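
A minimal sketch of that single-sigmoid alternative, reusing x_train and the raw 0/1 train_labels vector (commented out, as we stick with the two-unit softmax version above):

# model_sigmoid <- keras_model_sequential() %>%
#   layer_dense(units = 10, activation = "relu", input_shape = c(69)) %>%
#   layer_dense(units = 1, activation = "sigmoid")
#
# model_sigmoid %>% compile(
#   optimizer = "rmsprop",
#   loss = "binary_crossentropy",
#   metrics = c("accuracy")
# )
#
# # fit against the 0/1 vector rather than the one-hot matrix
# model_sigmoid %>% fit(x_train, train_labels, epochs = 5, batch_size = 512)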

We can inspect our network:

# Print a summary of a model
summary(model)
## ___________________________________________________________________________
## Layer (type)                     Output Shape                  Param #     
## ===========================================================================
## dense_3 (Dense)                  (None, 10)                    700         
## ___________________________________________________________________________
## dense_4 (Dense)                  (None, 2)                     22          
## ===========================================================================
## Total params: 722
## Trainable params: 722
## Non-trainable params: 0
## ___________________________________________________________________________

Choose loss function, optimiser and metric

Crossentropy has its origins in Information Theory and measures the distance between probability distributions or, in this case, between the ground-truth distribution and your predictions. It’s usually a good go-to for probabilities.

# Compile the model
model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)

We can create a validation set to track our progress as the model trains, so we can check it on data it has never seen. For simplicity we skip this step here, but it is worth doing in practice; a minimal sketch follows.
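
The sketch below uses the validation_split argument to fit(), which holds back a fraction of the training data for validation (commented out as we skip it here):

# history <- model %>% fit(
#   x_train,
#   y_train,
#   epochs = 5,
#   batch_size = 512,
#   validation_split = 0.2  # hold back 20% of training samples for validation
# )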

Training the model

We can now train the model for 5 epochs (5 iterations over all samples in the x_train and y_train tensors), in mini-batches of 512 samples.

history <- model %>% fit(
  x_train,
  y_train,
  epochs = 5,
  batch_size = 512
)

We can plot the accuracy and the loss of the model through training. We load ggplot2 so it looks nicer.

library(ggplot2)
plot(history)

plot of chunk 2018-03-04-validate

You might be pretty hyped by the accuracy but hold your horses! Let’s reflect on how well the null model would perform. What if we just predicted every patient would not be readmitted within 30 days?

# the number for which we were correct divided by total patients
sum(y_test[, 1] == 1) / dim(y_test)[1]
## [1] 0.8878189

Thus our neural network has the same accuracy as this null model! Agh, the curse of an imbalanced data set. This highlights an important point: it’s tempting to get caught up in the wizardry of deep learning, but remember it’s not magic; apply your standard thinking to the problem to avoid embarrassing outcomes like this (we should of course use a confusion matrix, as accuracy is not that useful a measure here, but we use it to make the point).

Improving the model

We need to penalise the model somehow for simply predicting the modal class (not readmitted), otherwise we will get stuck at this local minimum. A few approaches spring to mind: rebalance the data with SMOTE, customise the loss function (see alpha gov), or change the class weights (easier than customising the loss function).

SMOTE

Considering this SMOTE blog, our data should be considered a rare event (as < 15%). The general idea of this method is to artificially generate new examples of the minority class using the nearest neighbors of these cases.

print(prop.table(table(diab_scaled$target)))
## 
##         0         1 
## 0.8883976 0.1116024

We go back to our data frame version, the one where everything was scaled, to make handling easier, and randomly split the data.

set.seed(255)
splitIndex <- createDataPartition(diab_scaled$target, p = .50,
                                  list = FALSE,
                                  times = 1)
trainSplit <- diab_scaled[splitIndex,]
testSplit <- diab_scaled[-splitIndex,]
 
prop.table(table(trainSplit$target))
## 
##        0        1 
## 0.889116 0.110884

The outcome balance between both splits is still around 11%, and therefore representative of the bigger set - we’re in good shape.

Let’s create extra positive observations using SMOTE. We set perc.over = 100 to double the quantity of positive cases, and set perc.under = 200 to keep half of what was created as negative cases.

And I SMOTE his ruin upon the mountain-side. - Gandalf

library(DMwR)
trainSplit$target <- as.factor(trainSplit$target)
trainSplit <- DMwR::SMOTE(target ~ ., trainSplit,
                    perc.over = 100, perc.under = 200)
trainSplit$target <- as.numeric(trainSplit$target)
 
# try using train we have already
train_smote <- train
 
train_smote$target <- as_factor(train_smote$target)
train_smote <- SMOTE(target ~ ., train_smote,
                    perc.over = 100, perc.under = 200,
                    k = 5)
train_smote$target <- as.numeric(train_smote$target)

Fail. Unfortunately this errors and a quick attempt at debugging got nowhere. Let’s try one of our alternatives so we don’t have to mess about with the data again. We leave this code in the blog as we acknowledge it as a strategy worth investigating further.

Custom class weights

Let’s penalise the model for guessing zero every time; we can do that using the class_weight argument. In essence it makes it more expensive to guess zero each time. From ?fit and the documentation on class_weight it looks like we pass it a list. We force our algorithm to treat every instance of class 1 (readmission within 30 days) as 9 instances of class 0.

history <- model %>% fit(
  x_train,
  y_train,
  epochs = 5,
  batch_size = 512,
  class_weight = list("0" = 1,"1" = 9)
)

This returns a more realistic accuracy as our model is penalised from just guessing “No readmission” for every patient.

Predict labels of test data

# Predict the classes for the test data
classes <- model %>% predict_classes(x_test, batch_size = 128)
 
# convert y_test to vector, same dim as classes
# remember we converted it to a binary matrix before with
# to_categorical
y_test_v <- dplyr::if_else(y_test[, 1] == 1, 0, 1, missing = NULL)
 
# Confusion matrix
confusionMatrix(data = classes, reference = y_test_v)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction     0     1
##          0 14043  1326
##          1  8544  1528
##                                         
##                Accuracy : 0.612         
##                  95% CI : (0.606, 0.618)
##     No Information Rate : 0.8878        
##     P-Value [Acc > NIR] : 1             
##                                         
##                   Kappa : 0.0746        
##  Mcnemar's Test P-Value : <2e-16        
##                                         
##             Sensitivity : 0.6217        
##             Specificity : 0.5354        
##          Pos Pred Value : 0.9137        
##          Neg Pred Value : 0.1517        
##              Prevalence : 0.8878        
##          Detection Rate : 0.5520        
##    Detection Prevalence : 0.6041        
##       Balanced Accuracy : 0.5786        
##                                         
##        'Positive' Class : 0             
## 

As you can see the model is performing poorly, but at least it isn’t just guessing “No readmission” for every patient. It seems our class_weight specification has swung things the other way: by penalising missed readmissions so heavily, the model now creates lots of false positives.

We can also generate the likelihood of a patient being readmitted within 30 days; for some samples the model is more confident of a given outcome than for others.

model %>% predict(x_test[1:10,])
##            [,1]      [,2]
##  [1,] 0.5513239 0.4486761
##  [2,] 0.4845218 0.5154781
##  [3,] 0.5492703 0.4507296
##  [4,] 0.4040865 0.5959135
##  [5,] 0.5623027 0.4376973
##  [6,] 0.4599133 0.5400867
##  [7,] 0.6352720 0.3647280
##  [8,] 0.4240120 0.5759881
##  [9,] 0.4611554 0.5388445
## [10,] 0.6284453 0.3715546

The model seems to be mostly unsure with likelihoods hovering around 50:50.

Model evaluation

# Evaluate on test data and labels
score <- model %>% evaluate(x_test, y_test, batch_size = 128)
 
# Print the score
print(score)
## $loss
## [1] 0.6500477
## 
## $acc
## [1] 0.6120436

This model is not particularly useful; it needs some work.

Experimentation and model fine tuning

As you explore deep learning, don’t be disappointed that things weren’t as easy as with the MNIST data set. We need to work for improvements in our model, and we have a few options.

  • Change the number of hidden layers
  • Adjust the number of nodes in each hidden layer

Given our data set isn’t too small, this might be an OK approach (overfitting might be less of a problem than it would be with a small data set).

Let’s add another layer (we should change one thing at a time so we can see the impact on accuracy). We create another layer prior to the 10-unit hidden layer we had previously. It will take longer to run, but we might uncover more complicated features within the data that we might have missed before (or we might overfit).

model <- keras_model_sequential() %>%
  layer_dense(units = 30, activation = "relu", input_shape = c(69)) %>%
  layer_dense(units = 10, activation = "relu") %>%
  layer_dense(units = 2, activation = "softmax")
 
# Compile the model
model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)
 
history <- model %>% fit(
  x_train,
  y_train,
  epochs = 10,
  batch_size = 512,
  class_weight = list("0" = 1,"1" = 9)
)
 
# Evaluate on test data and labels
score <- model %>% evaluate(x_test, y_test, batch_size = 128)
# Print the score
print(score)
## $loss
## [1] 0.8878986
## 
## $acc
## [1] 0.2474745
# Predict the classes for the test data
classes <- model %>% predict_classes(x_test, batch_size = 128)
# Confusion matrix
confusionMatrix(classes, y_test_v)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction     0     1
##          0  3668   226
##          1 18919  2628
##                                           
##                Accuracy : 0.2475          
##                  95% CI : (0.2422, 0.2528)
##     No Information Rate : 0.8878          
##     P-Value [Acc > NIR] : 1               
##                                           
##                   Kappa : 0.0216          
##  Mcnemar's Test P-Value : <2e-16          
##                                           
##             Sensitivity : 0.1624          
##             Specificity : 0.9208          
##          Pos Pred Value : 0.9420          
##          Neg Pred Value : 0.1220          
##              Prevalence : 0.8878          
##          Detection Rate : 0.1442          
##    Detection Prevalence : 0.1531          
##       Balanced Accuracy : 0.5416          
##                                           
##        'Positive' Class : 0               
## 

The headline accuracy is actually worse this time round than with the simpler network, but the trade-off has shifted: we now miss only 226 of the 2,854 readmitted patients in the test set, at the cost of flagging 18,919 of the 22,587 non-readmitted patients as a readmission risk, which would create a lot of extra work.

Twisting the knobs

Not bad for a first go, and there are other options (sketched after the list) which include:

  • Fine tune optimiser
  • Reduce the learning rate
  • Adjusting the batch size and number of epochs
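
A sketch of what twisting those knobs might look like, with illustrative rather than tuned values (commented out, not run here):

# model %>% compile(
#   optimizer = optimizer_rmsprop(lr = 0.0001),  # fine tune the optimiser with a lower learning rate
#   loss = "binary_crossentropy",
#   metrics = c("accuracy")
# )
#
# history <- model %>% fit(
#   x_train, y_train,
#   epochs = 20,       # more epochs...
#   batch_size = 128,  # ...with smaller batches
#   class_weight = list("0" = 1, "1" = 9)
# )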

Saving your model

You can save your model using the save_model_hdf5() function as well as a few other options.
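
For example (the file name is our own choice):

# save architecture, weights and optimiser state in one file
save_model_hdf5(model, "readmission_model.h5")
# reload it later with
# model <- load_model_hdf5("readmission_model.h5")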

Learning from others

This field moves fast and it’s important to consult the literature for good setups. For example, a recent paper looked at this kind of problem; let’s use their method and model architecture (Jamei et al., 2017). Read the model training and evaluation section of this open PLOS ONE paper for a description of the design decisions taken and why.

We combine it with the fine tuning we used earlier, as this is a different data set and problem.

set.seed(1337)
model <- keras_model_sequential() %>%
  # one dense layer with roughly half the nodes of the input
  # and a dropout layer between each dense layer
  # (rate = 1 is unusual; typical dropout rates are around 0.2-0.5)
  layer_dense(units = 35, activation = "relu",
              input_shape = c(69)) %>%
  layer_dropout(rate = 1) %>%
  layer_dense(units = 5, activation = "relu") %>%
  layer_dropout(rate = 1) %>%
  layer_dense(units = 2, activation = "softmax")
 
# Compile the model
# adam optimiser
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)
 
# epoch and batch size changed
# method didnt specify class weight
history <- model %>% fit(
  x_train,
  y_train,
  epochs = 5,
  batch_size = 64,
  class_weight = list("0" = 1,"1" = 9)
)
 
# Evaluate on test data and labels
score <- model %>% evaluate(x_test, y_test, batch_size = 128)
# Print the score
print(score)
## $loss
## [1] 0.6315034
## 
## $acc
## [1] 0.6907747
# Predict the classes for the test data
classes <- model %>% predict_classes(x_test, batch_size = 128)
# Confusion matrix
confusionMatrix(classes, y_test_v)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction     0     1
##          0 16222  1502
##          1  6365  1352
##                                           
##                Accuracy : 0.6908          
##                  95% CI : (0.6851, 0.6965)
##     No Information Rate : 0.8878          
##     P-Value [Acc > NIR] : 1               
##                                           
##                   Kappa : 0.11            
##  Mcnemar's Test P-Value : <2e-16          
##                                           
##             Sensitivity : 0.7182          
##             Specificity : 0.4737          
##          Pos Pred Value : 0.9153          
##          Neg Pred Value : 0.1752          
##              Prevalence : 0.8878          
##          Detection Rate : 0.6376          
##    Detection Prevalence : 0.6967          
##       Balanced Accuracy : 0.5960          
##                                           
##        'Positive' Class : 0               
## 

This compares to the aforementioned paper’s sensitivity score of 59% for their neural network and 50% for traditional hospital risk-modelling methods in the United States.

Multiclass modelling

Using the same data we could see how we perform at predicting “No readmission”, readmission within 30 days, or readmission after more than 30 days. This would require a different loss function (categorical crossentropy). We leave this to you, the reader, to explore; a minimal sketch of the setup follows.
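
A minimal sketch of that multiclass setup, assuming the stored readmitted vector can be realigned with the scaled data (remember na.omit() dropped three rows, which would need handling first); it is commented out as we leave the exercise to you:

# # three classes: "<30", ">30", "NO" -> one-hot matrix
# y_multi <- to_categorical(as.integer(factor(readmitted)) - 1)
#
# model_multi <- keras_model_sequential() %>%
#   layer_dense(units = 10, activation = "relu", input_shape = c(69)) %>%
#   layer_dense(units = 3, activation = "softmax")
#
# model_multi %>% compile(
#   optimizer = "rmsprop",
#   loss = "categorical_crossentropy",
#   metrics = c("accuracy")
# )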

Conclusion

Keras provides a simple and powerful LEGO-block approach to building a neural network. In this blog we demonstrated how to go from a reasonably clean patient data set to a model that predicted the readmission of those patients, albeit poorly.

References

  • Strack, B., DeShazo, J. P., Gennings, C., Olmo, J. L., Ventura, S., Cios, K. J., & Clore, J. N. (2014). Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records. BioMed Research International, 2014, 1–11. https://doi.org/10.1155/2014/781670
  • Billings, J., Blunt, I., Steventon, A., Georghiou, T., Lewis, G., & Bardsley, M. (2012). Development of a predictive model to identify inpatients at risk of re-admission within 30 days of discharge (PARR-30). BMJ Open, 1–10. https://doi.org/10.1136/bmjopen-2012-001667
  • Jamei, M., Nisnevich, A., Wetchler, E., Sudat, S., & Liu, E. (2017). Predicting all-cause risk of 30-day hospital readmission using artificial neural networks. PLoS ONE, 12(7), 1–14. https://doi.org/10.1371/journal.pone.0181173
sessionInfo()
## R version 3.4.3 (2017-11-30)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: OS X El Capitan 10.11.6
## 
## Matrix products: default
## BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] caret_6.0-78     lattice_0.20-35  tidyr_0.7.2      openxlsx_4.0.17 
##  [5] readxl_1.0.0     here_0.1         tidyxl_1.0.0     purrr_0.2.4     
##  [9] e1071_1.6-8      Rtsne_0.13       dplyr_0.7.4      bindrcpp_0.2    
## [13] bayesAB_1.1.0    Matrix_1.2-12    ggplot2_2.2.1    simecol_0.8-10  
## [17] deSolve_1.20     keras_2.1.4.9000
## 
## loaded via a namespace (and not attached):
##  [1] nlme_3.1-131        lubridate_1.7.1     dimRed_0.1.0       
##  [4] rprojroot_1.2       tools_3.4.3         backports_1.1.1    
##  [7] R6_2.2.2            rpart_4.1-11        lazyeval_0.2.1     
## [10] colorspace_1.3-2    nnet_7.3-12         withr_2.1.0        
## [13] tidyselect_0.2.4    mnormt_1.5-5        compiler_3.4.3     
## [16] TSP_1.1-5           labeling_0.3        rmd2md_0.1.1       
## [19] scales_0.5.0        sfsmisc_1.1-1       DEoptimR_1.0-8     
## [22] hexbin_1.27.1       psych_1.7.8         robustbase_0.92-8  
## [25] readr_1.1.1         tfruns_1.3          stringr_1.2.0      
## [28] digest_0.6.12       foreign_0.8-69      minqa_1.2.4        
## [31] base64enc_0.1-3     pkgconfig_2.0.1     highr_0.6          
## [34] rlang_0.2.0         ddalpha_1.3.1       bindr_0.1          
## [37] extracat_1.7-4      jsonlite_1.5        tensorflow_1.5     
## [40] ModelMetrics_1.1.0  magrittr_1.5        Rcpp_0.12.15       
## [43] munsell_0.4.3       reticulate_1.5      stringi_1.1.6      
## [46] whisker_0.3-2       yaml_2.1.17         MASS_7.3-47        
## [49] plyr_1.8.4          recipes_0.1.1       grid_3.4.3         
## [52] parallel_3.4.3      splines_3.4.3       hms_0.4.0          
## [55] zeallot_0.1.0       knitr_1.18          stats4_3.4.3       
## [58] reshape2_1.4.3      codetools_0.2-15    CVST_0.2-1         
## [61] glue_1.2.0          evaluate_0.10.1     data.table_1.10.4-3
## [64] foreach_1.4.3       cellranger_1.1.0    gtable_0.2.0       
## [67] kernlab_0.9-25      assertthat_0.2.0    DRR_0.0.2          
## [70] gower_0.1.2         janitor_0.3.1       prodlim_1.6.1      
## [73] broom_0.4.3         class_7.3-14        survival_2.41-3    
## [76] timeDate_3042.101   RcppRoll_0.2.2      tibble_1.3.4       
## [79] iterators_1.0.8     lava_1.5.1          ipred_0.9-6