Training non-default regression models

Etienne Becht

June 2020

Introduction

This vignette explains how to specify non-default machine learning frameworks and their hyperparameters when applying Infinity Flow. We assume here that you have already read the basic usage vignette; if you are not familiar with that material, I suggest you look at it first.

This vignette will cover:

  1. Loading the example data
  2. Note on package design
  3. The regression_functions argument
  4. The extra_args_regression_params argument
  5. Neural networks

Loading the example data

Here is a single R code chunk that recapitulates all of the data preparation covered in the basic usage vignette.

if(!require(devtools)){
    install.packages("devtools")
}
if(!require(infinityFlow)){
    library(devtools)
    install_github("ebecht/infinityFlow")
}
library(infinityFlow)

data(steady_state_lung)
data(steady_state_lung_annotation)
data(steady_state_lung_backbone_specification)

dir <- file.path(tempdir(), "infinity_flow_example")
input_dir <- file.path(dir, "fcs")
write.flowSet(steady_state_lung, outdir = input_dir)
#> [1] "/home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/fcs"

write.csv(steady_state_lung_backbone_specification, file = file.path(dir, "backbone_selection_file.csv"), row.names = FALSE)

path_to_fcs <- file.path(dir, "fcs")
path_to_output <- file.path(dir, "output")
path_to_intermediary_results <- file.path(dir, "tmp")
backbone_selection_file <- file.path(dir, "backbone_selection_file.csv")

targets <- steady_state_lung_annotation$Infinity_target
names(targets) <- rownames(steady_state_lung_annotation)
isotypes <- steady_state_lung_annotation$Infinity_isotype
names(isotypes) <- rownames(steady_state_lung_annotation)

input_events_downsampling <- 1000
prediction_events_downsampling <- 500
cores <- 1L

Note on package design

The infinity_flow() function, which encapsulates the complete Infinity Flow computational pipeline, uses two arguments to select regression models and their hyperparameters, respectively. These two arguments are both lists and should have the same length. The idea is that the first list, regression_functions, contains the model templates (XGBoost, neural networks, SVMs…) to train, while the second specifies their hyperparameters. The list of templates is then fit to the data in parallel using socket-based clusters (using the parallel package through the pbapply package), which is more memory efficient.
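Conceptually, the pairing of the two lists can be sketched as follows. This is only an illustration with made-up fitter functions and hyperparameter names, not the package's actual internals:

```r
## Toy illustration: two lists of equal length, matched by position.
## The fitter functions and hyperparameter names here are made up.
model_templates <- list(
    modelA = function(params) paste("fit with eta =", params$eta),
    modelB = function(params) paste("fit with cost =", params$cost)
)
model_params <- list(
    list(eta = 0.05), # hyperparameters for modelA
    list(cost = 8)    # hyperparameters for modelB
)
## Each model template is called with its positionally-matched hyperparameter list
fits <- Map(function(f, p) f(p), model_templates, model_params)
```

This is why the two lists must have the same length: the first template gets the first hyperparameter list, the second gets the second, and so on.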

The regression_functions argument

This argument is a list of functions which specifies how many models to train per well and which ones. Each type of machine learning model is supported through a wrapper in the infinityFlow package, and has a name of the form fitter_*. See below for the complete list:

print(grep("fitter_", ls("package:infinityFlow"), value = TRUE))
#> [1] "fitter_glmnet"  "fitter_linear"  "fitter_nn"      "fitter_svm"    
#> [5] "fitter_xgboost"
fitter_ function   Backend            Model type
fitter_xgboost     XGBoost            Gradient boosted trees
fitter_nn          Tensorflow/Keras   Neural networks
fitter_svm         e1071              Support vector machines
fitter_glmnet      glmnet             Generalized linear and polynomial models
fitter_linear      stats              Linear and polynomial models

These functions rely on optional package dependencies (so that you do not need to install e.g. Keras if you are not planning to use it). We do, however, need to make sure that these dependencies are met:

optional_dependencies <- c("glmnetUtils", "e1071")
unmet_dependencies <- setdiff(optional_dependencies, rownames(installed.packages()))
if(length(unmet_dependencies) > 0){
    install.packages(unmet_dependencies)
}
for(pkg in optional_dependencies){
    library(pkg, character.only = TRUE)
}

In this vignette we will train all of these models. Note that if you do this on your own data, it may take quite a bit of memory: remember that the output expression matrix will be a numeric matrix with (prediction_events_downsampling x number of wells) rows and roughly (number of wells x number of models) columns.
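To get a rough sense of that size, here is a back-of-the-envelope calculation. The number of wells below is a made-up value, and the estimate only covers the final matrix, not intermediate copies:

```r
## Hypothetical sizing sketch: the values below are assumptions,
## not taken from the example dataset.
n_wells <- 266                         # assumed number of Infinity wells
n_models <- 4                          # e.g. XGBoost, SVM, LASSO2, LM
prediction_events_downsampling <- 500

n_rows <- prediction_events_downsampling * n_wells # events kept across all wells
n_cols <- n_wells * n_models                       # one imputed column per well per model

## A numeric matrix stores 8 bytes per entry
size_gb <- n_rows * n_cols * 8 / 1024^3
```

With these assumed values the final matrix alone is around 1 GB, which is why you may want to lower prediction_events_downsampling or the number of models on large panels.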

To train multiple models we create a list of these fitter_* functions and assign this to the regression_functions argument that will be fed to the infinity_flow function. The names of this list will be used to name your models.

regression_functions <- list(
    XGBoost = fitter_xgboost, # XGBoost
    SVM = fitter_svm, # SVM
    LASSO2 = fitter_glmnet, # L1-penalized 2nd degree polynomial model
    LM = fitter_linear # Linear model
)

The extra_args_regression_params argument

This argument is a list of lists (so of the form list(list(...), list(...), etc.)) of length length(regression_functions). Each element of extra_args_regression_params is thus itself a list, which is used to pass named arguments to the corresponding machine learning fitting function. The elements of extra_args_regression_params are matched to the models in regression_functions by position (the first regression model is matched with the first element of the list of arguments, the second elements are matched together, and so on).

backbone_size <- table(read.csv(backbone_selection_file)[,"type"])["backbone"]
extra_args_regression_params <- list(
    ## Passed to the first element of `regression_functions`, i.e. XGBoost. See ?xgboost for which parameters can be passed through this list
    list(nrounds = 500, eta = 0.05),

    ## If you add the neural network model to `regression_functions`, a list like the one below would be passed to fitter_nn through keras::fit. See https://keras.rstudio.com/articles/tutorial_basic_regression.html
    # list(
    #     object = { ## Specifies the network's architecture, loss function and optimization method
    #         model = keras_model_sequential()
    #         model %>%
    #             layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
    #             layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
    #             layer_dense(units = 1, activation = "linear")
    #         model %>%
    #             compile(loss = "mean_squared_error", optimizer = optimizer_sgd(lr = 0.005))
    #         serialize_model(model)
    #     },
    #     epochs = 1000, ## Maximum number of training epochs. Training is stopped early if the loss on the validation set does not improve for 20 epochs; this early stopping is hardcoded in fitter_nn.
    #     validation_split = 0.2, ## Fraction of the training data used to monitor validation loss
    #     verbose = 0,
    #     batch_size = 128 ## Size of the minibatches for training
    # ),

    ## Passed to the second element, SVMs. See help(svm, "e1071") for possible arguments
    list(type = "nu-regression", cost = 8, nu = 0.5, kernel = "radial"),

    ## Passed to the third element, fitter_glmnet. This should contain a mandatory argument `degree`, which specifies the degree of the polynomial model (1 for linear, 2 for quadratic, etc.). Here we use degree = 2, corresponding to our LASSO2 model. Other arguments are passed to getS3method("cv.glmnet", "formula")
    list(alpha = 1, nfolds = 10, degree = 2),

    ## Passed to the fourth element, fitter_linear. This only accepts a degree argument specifying the degree of the polynomial model. Here we use degree = 1, corresponding to a linear model.
    list(degree = 1)
)

We can now run the pipeline with these custom arguments to train all the models.

if(length(regression_functions) != length(extra_args_regression_params)){
    stop("Number of models and number of lists of hyperparameters mismatch")
}
imputed_data <- infinity_flow(
    regression_functions = regression_functions,
    extra_args_regression_params = extra_args_regression_params,
    path_to_fcs = path_to_fcs,
    path_to_output = path_to_output,
    path_to_intermediary_results = path_to_intermediary_results,
    backbone_selection_file = backbone_selection_file,
    annotation = targets,
    isotype = isotypes,
    input_events_downsampling = input_events_downsampling,
    prediction_events_downsampling = prediction_events_downsampling,
    verbose = TRUE,
    cores = cores
)
#> Using directories...
#>  input: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/fcs
#>  intermediary: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/tmp
#>  subset: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/tmp/subsetted_fcs
#>  rds: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/tmp/rds
#>  annotation: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/tmp/annotation.csv
#>  output: /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/infinity_flow_example/output
#> Parsing and subsampling input data
#>  Downsampling to 1000 events per input file
#>  Concatenating expression matrices
#>  Writing to disk
#> Logicle-transforming the data
#>  Backbone data
#>  Exploratory data
#>  Writing to disk
#>  Transforming expression matrix
#>  Writing to disk
#> Harmonizing backbone data
#>  Scaling expression matrices
#>  Writing to disk
#> Fitting regression models
#>  Randomly selecting 50% of the subsetted input files to fit models
#>  Fitting...
#>      XGBoost
#> 
#>  10.30672 seconds
#>      SVM
#> 
#>  1.513759 seconds
#>      LASSO2
#> 
#>  7.205616 seconds
#>      LM
#> 
#>  0.07789373 seconds
#> Imputing missing measurements
#>  Randomly drawing events to predict from the test set
#>  Imputing...
#>      XGBoost
#> 
#>  0.9361365 seconds
#>      SVM
#> 
#>  1.070411 seconds
#>      LASSO2
#> 
#>  1.32601 seconds
#>      LM
#> 
#>  0.04851532 seconds
#>  Concatenating predictions
#>  Writing to disk
#> Performing dimensionality reduction
#> 21:16:00 UMAP embedding parameters a = 1.262 b = 1.003
#> 21:16:00 Read 5000 rows and found 17 numeric columns
#> 21:16:00 Using Annoy for neighbor search, n_neighbors = 15
#> 21:16:00 Building Annoy index with metric = euclidean, n_trees = 50
#> 0%   10   20   30   40   50   60   70   80   90   100%
#> [----|----|----|----|----|----|----|----|----|----|
#> **************************************************|
#> 21:16:00 Writing NN index file to temp file /home/biocbuild/bbs-3.20-bioc/tmpdir/Rtmp77ml3k/file28f8376bc14fa
#> 21:16:00 Searching Annoy index using 1 thread, search_k = 1500
#> 21:16:01 Annoy recall = 100%
#> 21:16:02 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 15
#> 21:16:02 Initializing from normalized Laplacian + noise (using irlba)
#> 21:16:02 Commencing optimization for 1000 epochs, with 102420 positive edges using 1 thread
#> 21:16:13 Optimization finished
#> Exporting results
#>  Transforming predictions back to a linear scale
#>  Exporting FCS files (1 per well)
#> Plotting
#>  Chopping off the top and bottom 0.005 quantiles
#>  Shuffling the order of cells (rows)
#>  Producing plot
#> Background correcting
#>  Transforming background-corrected predictions. (Use logarithm to visualize)
#>  Exporting FCS files (1 per well)
#> Plotting
#>  Chopping off the top and bottom 0.005 quantiles
#>  Shuffling the order of cells (rows)
#>  Producing plot

Our model names are appended to the predicted markers in the output. For more discussion of the outputs (including output files written to disk and plots), see the basic usage vignette.

print(imputed_data$bgc[1:2, ])
#>      FSC-A       FSC-H      FSC-W   SSC-A      SSC-H      SSC-W CD69-CD301b
#> 3 49252.20 -0.06091381 -0.8078054 1780.66 -1.2859263 -0.9602120   -0.034116
#> 5 32859.87 -2.11285859  1.1711559 8654.45  0.8238664  0.7161082   -1.976388
#>      Zombie      MHCII         CD4      CD44        CD8      CD11c       CD11b
#> 3 -13.60832 -0.6493344  0.02204841 -1.053567 -0.6539105 0.05967517 -0.08511018
#> 5 299.31223  0.3218636 -2.23377590  1.088059 -1.6104266 0.16279209  1.78583676
#>         F480     Ly6C      Lineage  CD45a488 FJComp-PE(yg)-A       CD24
#> 3 -0.8566221 0.692776  0.008099969 0.3556623       0.8701703 -1.8416419
#> 5  1.3765564 1.813770 -2.773666361 0.7809394       1.1269006  0.1428919
#>       CD103     Time CD137.LASSO2_bgc CD137.LM_bgc CD137.SVM_bgc
#> 3 0.2928121 3417.409      0.005656924   0.08381381    -0.4565596
#> 5 1.8409255 2817.607      0.607539235   0.11863212     0.1137202
#>   CD137.XGBoost_bgc CD28.LASSO2_bgc CD28.LM_bgc CD28.SVM_bgc CD28.XGBoost_bgc
#> 3        -0.2841849      0.05911761  0.05845164   -0.3799646      -0.03474319
#> 5         0.4299413      0.54892115 -0.04681579    0.5370229       0.15541957
#>   CD49b(pan-NK).LASSO2_bgc CD49b(pan-NK).LM_bgc CD49b(pan-NK).SVM_bgc
#> 3                0.5374138            0.1264110            1.57515680
#> 5               -0.7390846            0.1428431           -0.04174323
#>   CD49b(pan-NK).XGBoost_bgc KLRG1.LASSO2_bgc KLRG1.LM_bgc KLRG1.SVM_bgc
#> 3                 1.2867283        0.7323459   0.33431545     0.6963470
#> 5                -0.1930101        0.7200192  -0.08479658     0.5624959
#>   KLRG1.XGBoost_bgc Ly-49c/F/I/H.LASSO2_bgc Ly-49c/F/I/H.LM_bgc
#> 3        1.01164930             -0.06686502          0.02940654
#> 5        0.09828812              0.47593210         -0.41282590
#>   Ly-49c/F/I/H.SVM_bgc Ly-49c/F/I/H.XGBoost_bgc Podoplanin.LASSO2_bgc
#> 3           -0.5466389                0.0528543            -0.3394822
#> 5           -0.3960458                0.1899634            -0.4316654
#>   Podoplanin.LM_bgc Podoplanin.SVM_bgc Podoplanin.XGBoost_bgc SHIgG.LASSO2_bgc
#> 3       -0.08241831         -0.8776048              0.1638366    -9.740645e-17
#> 5       -0.32185973         -0.1353776             -0.3920497     5.960279e-17
#>    SHIgG.LM_bgc SHIgG.SVM_bgc SHIgG.XGBoost_bgc SSEA-3.LASSO2_bgc SSEA-3.LM_bgc
#> 3  5.626396e-16 -7.339967e-17      -8.09053e-16        -0.1025600    -0.1336799
#> 5 -2.224067e-16 -2.304089e-16       4.47021e-16         0.1377444     0.0388616
#>   SSEA-3.SVM_bgc SSEA-3.XGBoost_bgc TCR Vg3.LASSO2_bgc TCR Vg3.LM_bgc
#> 3     -0.7571429         -0.4985497         0.02107899    -0.02263371
#> 5      0.2871030          0.5068531        -0.32798474    -0.05855131
#>   TCR Vg3.SVM_bgc TCR Vg3.XGBoost_bgc rIgM.LASSO2_bgc  rIgM.LM_bgc rIgM.SVM_bgc
#> 3      -0.2454745          0.03962381   -2.246143e-16 8.940419e-17 1.148458e-15
#> 5       1.0846197          0.18345066    8.940419e-17 8.940419e-17 8.344391e-16
#>   rIgM.XGBoost_bgc    UMAP1    UMAP2 PE_id
#> 3    -6.520438e-16 788.8344 446.3685     1
#> 5    -8.090530e-16 554.0692 219.9039     1
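As the printout shows, imputed columns follow the naming pattern target.model_bgc. A quick way to select one model's predictions is to match that suffix. Here is a small sketch using a few column names from the output above:

```r
## Sketch: selecting one model's background-corrected predictions by name.
## Column names follow the "<target>.<model>_bgc" pattern shown above.
cols <- c("CD137.LASSO2_bgc", "CD137.LM_bgc", "CD137.SVM_bgc", "CD137.XGBoost_bgc")
lm_cols <- grep("\\.LM_bgc$", cols, value = TRUE)
```

In practice you would run the same grep over colnames(imputed_data$bgc) to subset the matrix to a single model.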

Neural networks

Neural networks won’t build within knitr for me, but here is an example of the syntax if you want to use them.

Note: since I updated to R 4.0.1, there has been an issue with serialization of the neural networks when using socket-based parallelism. If you want to use neural networks, please make sure to set

cores = 1L
optional_dependencies <- c("keras", "tensorflow")
unmet_dependencies <- setdiff(optional_dependencies, rownames(installed.packages()))
if(length(unmet_dependencies) > 0){
     install.packages(unmet_dependencies)
}
for(pkg in optional_dependencies){
    library(pkg, character.only = TRUE)
}

invisible(eval(try(keras_model_sequential()))) ## avoids conflicts with flowCore...

if(!is_keras_available()){
     install_keras() ## Install keras using the R interface - can take a while
}

if (!requireNamespace("BiocManager", quietly = TRUE)){
    install.packages("BiocManager")
}
BiocManager::install("infinityFlow")

library(infinityFlow)

data(steady_state_lung)
data(steady_state_lung_annotation)
data(steady_state_lung_backbone_specification)

dir <- file.path(tempdir(), "infinity_flow_example")
input_dir <- file.path(dir, "fcs")
write.flowSet(steady_state_lung, outdir = input_dir)

write.csv(steady_state_lung_backbone_specification, file = file.path(dir, "backbone_selection_file.csv"), row.names = FALSE)

path_to_fcs <- file.path(dir, "fcs")
path_to_output <- file.path(dir, "output")
path_to_intermediary_results <- file.path(dir, "tmp")
backbone_selection_file <- file.path(dir, "backbone_selection_file.csv")

targets <- steady_state_lung_annotation$Infinity_target
names(targets) <- rownames(steady_state_lung_annotation)
isotypes <- steady_state_lung_annotation$Infinity_isotype
names(isotypes) <- rownames(steady_state_lung_annotation)

input_events_downsampling <- 1000
prediction_events_downsampling <- 500

## Passed to fitter_nn, e.g. neural networks through keras::fit. See https://keras.rstudio.com/articles/tutorial_basic_regression.html
regression_functions <- list(NN = fitter_nn)

backbone_size <- table(read.csv(backbone_selection_file)[,"type"])["backbone"]
extra_args_regression_params <- list(
    list(
        object = { ## Specifies the network's architecture, loss function and optimization method
            model = keras_model_sequential()
            model %>%
                layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
                layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
                layer_dense(units = 1, activation = "linear")
            model %>%
                compile(loss = "mean_squared_error", optimizer = optimizer_sgd(lr = 0.005))
            serialize_model(model)
        },
        epochs = 1000, ## Maximum number of training epochs. Training is stopped early if the loss on the validation set does not improve for 20 epochs; this early stopping is hardcoded in fitter_nn.
        validation_split = 0.2, ## Fraction of the training data used to monitor validation loss
        verbose = 0,
        batch_size = 128 ## Size of the minibatches for training
    )
)

imputed_data <- infinity_flow(
    regression_functions = regression_functions,
    extra_args_regression_params = extra_args_regression_params,
    path_to_fcs = path_to_fcs,
    path_to_output = path_to_output,
    path_to_intermediary_results = path_to_intermediary_results,
    backbone_selection_file = backbone_selection_file,
    annotation = targets,
    isotype = isotypes,
    input_events_downsampling = input_events_downsampling,
    prediction_events_downsampling = prediction_events_downsampling,
    verbose = TRUE,
    cores = 1L
)

Conclusion

Thank you for following this vignette; I hope you made it to the end without too much of a headache and that it was informative. General questions about proper usage of the package are best asked on the Bioconductor support site to maximize visibility for future users. If you encounter bugs, feel free to raise an issue on infinityFlow’s GitHub.

Information about the R session when this vignette was built

sessionInfo()
#> R version 4.4.1 (2024-06-14)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#> 
#> Matrix products: default
#> BLAS:   /home/biocbuild/bbs-3.20-bioc/R/lib/libRblas.so 
#> LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.0
#> 
#> Random number generation:
#>  RNG:     L'Ecuyer-CMRG 
#>  Normal:  Inversion 
#>  Sample:  Rejection 
#>  
#> locale:
#>  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
#>  [3] LC_TIME=en_GB              LC_COLLATE=C              
#>  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
#>  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
#>  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
#> 
#> time zone: America/New_York
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] e1071_1.7-16        glmnetUtils_1.1.9   infinityFlow_1.16.0
#> [4] flowCore_2.18.0    
#> 
#> loaded via a namespace (and not attached):
#>  [1] sass_0.4.9          generics_0.1.3      class_7.3-22       
#>  [4] gtools_3.9.5        shape_1.4.6.1       lattice_0.22-6     
#>  [7] digest_0.6.37       evaluate_1.0.1      grid_4.4.1         
#> [10] iterators_1.0.14    fastmap_1.2.0       xgboost_1.7.8.1    
#> [13] foreach_1.5.2       jsonlite_1.8.9      Matrix_1.7-1       
#> [16] glmnet_4.1-8        survival_3.7-0      pbapply_1.7-2      
#> [19] codetools_0.2-20    jquerylib_0.1.4     cli_3.6.3          
#> [22] rlang_1.1.4         RProtoBufLib_2.18.0 Biobase_2.66.0     
#> [25] RcppAnnoy_0.0.22    uwot_0.2.2          matlab_1.0.4.1     
#> [28] splines_4.4.1       cachem_1.1.0        yaml_2.3.10        
#> [31] cytolib_2.18.0      tools_4.4.1         raster_3.6-30      
#> [34] parallel_4.4.1      BiocGenerics_0.52.0 R6_2.5.1           
#> [37] png_0.1-8           proxy_0.4-27        matrixStats_1.4.1  
#> [40] stats4_4.4.1        lifecycle_1.0.4     S4Vectors_0.44.0   
#> [43] irlba_2.3.5.1       terra_1.7-83        bslib_0.8.0        
#> [46] data.table_1.16.2   Rcpp_1.0.13         xfun_0.48          
#> [49] knitr_1.48          htmltools_0.5.8.1   rmarkdown_2.28     
#> [52] compiler_4.4.1      sp_2.1-4