get_default_training_options {MOFA2}        R Documentation

Description

Function to obtain the default training options.
Usage

get_default_training_options(object)
Arguments

object: an untrained MOFA object
Details

This function provides a default set of training options that can be modified and passed to the MOFA object in the prepare_mofa step (see example), i.e. after creating a MOFA object (using create_mofa) and before starting the training (using run_mofa).
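A minimal sketch of retrieving and inspecting the defaults (the object name MOFAobject is illustrative and stands for any untrained MOFA object):

library(MOFA2)

# Retrieve the default training options from an untrained MOFA object
train_opts <- get_default_training_options(MOFAobject)

# The result is a plain named list; inspect it before editing individual elements
str(train_opts)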
The training options are the following:
maxiter: numeric value indicating the maximum number of iterations. Default is 1000. Convergence is assessed using the ELBO statistic.
drop_factor_threshold: numeric indicating the threshold on the fraction of variance explained to consider a factor inactive and drop it from the model. For example, a value of 0.01 implies that factors explaining less than 1% of variance (in each view) will be dropped. Default is -1 (no dropping of factors); a sketch of enabling it follows the list.
convergence_mode: character indicating the convergence criterion, either "slow", "medium" or "fast", corresponding to a 5e-7%, 5e-6% or 5e-5% deltaELBO change w.r.t. the ELBO at the first iteration.
verbose: logical indicating whether to generate a verbose output.
startELBO: integer indicating the first iteration to compute the ELBO (default is 1).
freqELBO: integer indicating the frequency of ELBO computations, i.e. the ELBO is computed every freqELBO iterations (default is 1).
stochastic: logical indicating whether to use stochastic variational inference (only required for very big data sets, default is FALSE).
gpu_mode: logical indicating whether to use GPUs (see details).
seed: numeric indicating the seed for reproducibility (default is 42).
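For example, factor dropping can be switched on and the convergence check made stricter by editing the returned list before prepare_mofa. A minimal sketch; the option names come from the list above and the object name MOFAobject is illustrative:

# Edit selected options before passing them to prepare_mofa
train_opts <- get_default_training_options(MOFAobject)
train_opts$convergence_mode <- "slow"      # 5e-7% deltaELBO threshold
train_opts$drop_factor_threshold <- 0.01   # drop factors explaining < 1% of variance in each view
train_opts$stochastic <- TRUE              # only worth considering for very big data sets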
Value

Returns a list with default training options.
Examples

# Using existing simulated data with two groups and two views
file <- system.file("extdata", "test_data.RData", package = "MOFA2")

# Load data dt (in data.frame format)
load(file)

# Create the MOFA object
MOFAmodel <- create_mofa(dt)

# Load default training options
train_opts <- get_default_training_options(MOFAmodel)

# Edit some of the training options
train_opts$convergence_mode <- "medium"
train_opts$startELBO <- 100
train_opts$seed <- 42

# Prepare the MOFA object
MOFAmodel <- prepare_mofa(MOFAmodel, training_options = train_opts)
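As noted in the details above, training is then started with run_mofa. A hedged continuation of the example; the output path is illustrative and run_mofa needs the mofapy2 Python backend to be available:

# Train the prepared model (continuation of the example above)
outfile <- file.path(tempdir(), "mofa_model.hdf5")
MOFAtrained <- run_mofa(MOFAmodel, outfile = outfile)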