The ScaledMatrix class
ScaledMatrix 1.12.0
The ScaledMatrix provides yet another method of running scale() on a matrix.
In other words, these three operations are equivalent:
mat <- matrix(rnorm(10000), ncol=10)
smat1 <- scale(mat)
head(smat1)
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.2094656 -1.1540623 -1.7134655 -0.2698460 0.6901343 0.8175238
## [2,] 0.1794885 -0.3075691 1.0462473 0.3394468 -0.1928639 -1.5344863
## [3,] 0.6314098 1.5269588 0.2689313 1.9685887 -1.5726621 -0.2078573
## [4,] 0.7427763 -0.5018108 -0.5435497 0.3974223 0.5529345 2.1062528
## [5,] 0.6629235 -0.1214202 -0.7787112 -1.3084477 0.7857412 1.1912687
## [6,] 0.9338119 -0.1445782 0.6388147 -1.4971937 0.9059855 0.2918294
## [,7] [,8] [,9] [,10]
## [1,] 0.7172570 1.7615900 -0.3970177 -1.8368473
## [2,] -0.1144703 1.3672335 -0.8027370 0.5884601
## [3,] 0.1731127 1.1960615 0.4033092 -0.3454446
## [4,] 0.9538420 1.3251154 -0.4365908 0.7574602
## [5,] 0.1822484 -0.6531892 0.4197131 -0.4385240
## [6,] -0.4483953 -0.6672845 -0.1667319 -0.4024450
library(DelayedArray)
smat2 <- scale(DelayedArray(mat))
head(smat2)
## <6 x 10> DelayedMatrix object of type "double":
## [,1] [,2] [,3] ... [,9] [,10]
## [1,] 0.2094656 -1.1540623 -1.7134655 . -0.3970177 -1.8368473
## [2,] 0.1794885 -0.3075691 1.0462473 . -0.8027370 0.5884601
## [3,] 0.6314098 1.5269588 0.2689313 . 0.4033092 -0.3454446
## [4,] 0.7427763 -0.5018108 -0.5435497 . -0.4365908 0.7574602
## [5,] 0.6629235 -0.1214202 -0.7787112 . 0.4197131 -0.4385240
## [6,] 0.9338119 -0.1445782 0.6388147 . -0.1667319 -0.4024450
library(ScaledMatrix)
smat3 <- ScaledMatrix(mat, center=TRUE, scale=TRUE)
head(smat3)
## <6 x 10> ScaledMatrix object of type "double":
## [,1] [,2] [,3] ... [,9] [,10]
## [1,] 0.2094656 -1.1540623 -1.7134655 . -0.3970177 -1.8368473
## [2,] 0.1794885 -0.3075691 1.0462473 . -0.8027370 0.5884601
## [3,] 0.6314098 1.5269588 0.2689313 . 0.4033092 -0.3454446
## [4,] 0.7427763 -0.5018108 -0.5435497 . -0.4365908 0.7574602
## [5,] 0.6629235 -0.1214202 -0.7787112 . 0.4197131 -0.4385240
## [6,] 0.9338119 -0.1445782 0.6388147 . -0.1667319 -0.4024450
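As a quick sanity check, the three results can be compared numerically. This is a minimal sketch (not part of the workflow above); the attributes attached by scale() are ignored in the comparison:
all.equal(smat1, as.matrix(smat2), check.attributes=FALSE)
all.equal(smat1, as.matrix(smat3), check.attributes=FALSE)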
The biggest difference lies in how they behave in downstream matrix operations.

smat1 is an ordinary matrix, with the scaled and centered values fully realized in memory. Nothing too unusual here.

smat2 is a DelayedMatrix and undergoes block processing, whereby chunks are realized and operated on one at a time. This sacrifices speed for greater memory efficiency by avoiding a copy of the entire matrix. In particular, it preserves the structure of the original mat, e.g., a sparse or file-backed representation.

smat3 is a ScaledMatrix that refactors certain operations so that they can be applied to the original mat without any scaling or centering. This takes advantage of the original data structure to speed up matrix multiplication and row/column sums, albeit at the cost of numerical precision.

Given an original matrix \(\mathbf{X}\) with \(m\) rows and \(n\) columns, a vector of column centers \(\mathbf{c}\) and a vector of column scaling values \(\mathbf{s}\), our scaled matrix can be written as:
\[ \mathbf{Y} = (\mathbf{X} - \mathbf{1}_m \mathbf{c}^T) \mathbf{S} \]
where \(\mathbf{S} = \text{diag}(s_1^{-1}, ..., s_n^{-1})\) and \(\mathbf{1}_m\) is a column vector of \(m\) ones. If we wanted to right-multiply it by another matrix \(\mathbf{A}\), we would have:
\[ \mathbf{Y}\mathbf{A} = \mathbf{X}\mathbf{S}\mathbf{A} - \mathbf{1}_m (\mathbf{c}^T \mathbf{S}\mathbf{A}) \]
The right-most term is simply the outer product of \(\mathbf{1}_m\) with the row vector \(\mathbf{c}^T \mathbf{S}\mathbf{A}\), which is cheap to compute as \(\mathbf{S}\mathbf{A}\) is small. More important is the fact that we can use the matrix multiplication operator for \(\mathbf{X}\) with \(\mathbf{S}\mathbf{A}\), as this allows us to use highly efficient algorithms for certain data representations, e.g., sparse matrices.
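The following is a minimal sketch (not from the package) that verifies this refactoring on a small dense example; X, A, cvec, svec, S and SA are illustrative names only:
set.seed(42)
X <- matrix(rnorm(200), ncol=10)   # small stand-in for the original matrix
A <- matrix(rnorm(50), nrow=10)    # matrix to right-multiply by
cvec <- colMeans(X)                # column centers
svec <- apply(X, 2, sd)            # column scaling values
S <- diag(1/svec)
SA <- S %*% A
direct <- scale(X) %*% A                                       # scale first, then multiply
refactored <- X %*% SA - rep(1, nrow(X)) %o% drop(crossprod(cvec, SA))  # multiply, then subtract the rank-1 correction
all.equal(direct, refactored)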
library(Matrix)
mat <- rsparsematrix(20000, 10000, density=0.01)
smat <- ScaledMatrix(mat, center=TRUE, scale=TRUE)
blob <- matrix(runif(ncol(mat) * 5), ncol=5)
system.time(out <- smat %*% blob)
## user system elapsed
## 0.024 0.000 0.024
# The slower way with block processing.
da <- scale(DelayedArray(mat))
system.time(out2 <- da %*% blob)
## user system elapsed
## 27.158 8.383 35.722
The same logic applies to left-multiplication and cross-products. This allows us to easily speed up high-level operations involving matrix multiplication by simply switching to a ScaledMatrix, e.g., in the approximate PCA algorithms from the BiocSingular package.
library(BiocSingular)
set.seed(1000)
system.time(pcs <- runSVD(smat, k=10, BSPARAM=IrlbaParam()))
## user system elapsed
## 8.145 0.328 8.473
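Left-multiplication benefits in the same way. A quick sketch, where the dense lhs matrix is purely illustrative and timings will vary by machine:
lhs <- matrix(runif(5 * nrow(smat)), nrow=5)
system.time(lout <- lhs %*% smat)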
Row and column sums are special cases of matrix multiplication and can be computed quickly:
system.time(rowSums(smat))
## user system elapsed
## 0.006 0.000 0.007
system.time(rowSums(da))
## user system elapsed
## 19.342 7.164 26.507
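Column sums are handled analogously (again, timings will vary by machine):
system.time(colSums(smat))
system.time(colSums(da))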
Subsetting, transposition and renaming of the dimensions are all supported without loss of the ScaledMatrix
representation:
smat[,1:5]
## <20000 x 5> ScaledMatrix object of type "double":
## [,1] [,2] [,3] [,4] [,5]
## [1,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [2,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [3,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [4,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [5,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## ... . . . . .
## [19996,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [19997,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [19998,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [19999,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
## [20000,] 9.304206e-05 -8.638355e-03 -1.520641e-03 6.125965e-03 -5.486109e-03
t(smat)
## <10000 x 20000> ScaledMatrix object of type "double":
## [,1] [,2] [,3] ... [,19999]
## [1,] 9.304206e-05 9.304206e-05 9.304206e-05 . 9.304206e-05
## [2,] -8.638355e-03 -8.638355e-03 -8.638355e-03 . -8.638355e-03
## [3,] -1.520641e-03 -1.520641e-03 -1.520641e-03 . -1.520641e-03
## [4,] 6.125965e-03 6.125965e-03 6.125965e-03 . 6.125965e-03
## [5,] -5.486109e-03 -5.486109e-03 -5.486109e-03 . -5.486109e-03
## ... . . . . .
## [9996,] -0.005679925 -0.005679925 -0.005679925 . -0.005679925
## [9997,] -0.003531756 -0.003531756 -0.003531756 . -0.003531756
## [9998,] 0.002662219 0.002662219 0.002662219 . 0.002662219
## [9999,] 0.002236468 0.002236468 0.002236468 . 0.002236468
## [10000,] 0.003576948 0.003576948 0.003576948 . 0.003576948
## [,20000]
## [1,] 9.304206e-05
## [2,] -8.638355e-03
## [3,] -1.520641e-03
## [4,] 6.125965e-03
## [5,] -5.486109e-03
## ... .
## [9996,] -0.005679925
## [9997,] -0.003531756
## [9998,] 0.002662219
## [9999,] 0.002236468
## [10000,] 0.003576948
rownames(smat) <- paste0("GENE_", 1:20000)
smat
## <20000 x 10000> ScaledMatrix object of type "double":
## [,1] [,2] [,3] ... [,9999]
## GENE_1 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_2 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_3 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_4 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_5 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## ... . . . . .
## GENE_19996 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_19997 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_19998 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_19999 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## GENE_20000 9.304206e-05 -8.638355e-03 -1.520641e-03 . 0.002236468
## [,10000]
## GENE_1 0.003576948
## GENE_2 0.003576948
## GENE_3 0.003576948
## GENE_4 0.003576948
## GENE_5 0.003576948
## ... .
## GENE_19996 0.003576948
## GENE_19997 0.003576948
## GENE_19998 0.003576948
## GENE_19999 0.003576948
## GENE_20000 0.003576948
Other operations will cause the ScaledMatrix
to collapse to the general DelayedMatrix
representation, after which point block processing will be used.
smat + 1
## <20000 x 10000> DelayedMatrix object of type "double":
## [,1] [,2] [,3] ... [,9999] [,10000]
## GENE_1 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_2 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_3 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_4 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_5 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## ... . . . . . .
## GENE_19996 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_19997 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_19998 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_19999 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
## GENE_20000 1.0000930 0.9913616 0.9984794 . 1.002236 1.003577
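We can confirm the change of representation with standard S4 class checks:
is(smat + 1, "ScaledMatrix")   # FALSE once the representation has collapsed
class(smat + 1)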
For the most part, the implementation of the multiplication assumes that the \(\mathbf{A}\) matrix and the matrix product are small compared to \(\mathbf{X}\).
It is also possible to multiply two ScaledMatrix objects together if the underlying matrices have efficient operators for their product; however, if this is not the case, the ScaledMatrix offers little benefit while adding overhead.
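As a small illustrative sketch of multiplying two ScaledMatrix objects, using sparse backing matrices (the sizes and densities below are arbitrary):
m1 <- ScaledMatrix(rsparsematrix(1000, 500, density=0.05), center=TRUE, scale=TRUE)
m2 <- ScaledMatrix(rsparsematrix(500, 20, density=0.05), center=TRUE, scale=TRUE)
dim(m1 %*% m2)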
It is also worth noting that this speed-up is not entirely free.
The expression above involves subtracting two matrices with potentially large values, which runs the risk of catastrophic cancellation.
The example below demonstrates how a ScaledMatrix is more susceptible to loss of precision than a normal DelayedArray:
set.seed(1000)
mat <- matrix(rnorm(1000000), ncol=100000)
big.mat <- mat + 1e12
# The 'correct' value, unaffected by numerical precision.
ref <- rowMeans(scale(mat))
head(ref)
## [1] -0.0025584703 -0.0008570664 -0.0019225335 -0.0001039903 0.0024761772
## [6] 0.0032943203
# The value from scale'ing a DelayedArray.
library(DelayedArray)
smat2 <- scale(DelayedArray(big.mat))
head(rowMeans(smat2))
## [1] -0.0025583534 -0.0008571123 -0.0019226040 -0.0001039539 0.0024761618
## [6] 0.0032943783
# The value from a ScaledMatrix.
library(ScaledMatrix)
smat3 <- ScaledMatrix(big.mat, center=TRUE, scale=TRUE)
head(rowMeans(smat3))
## [1] -0.00480 0.00848 0.00544 -0.00976 -0.01056 0.01520
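The size of the error can be quantified directly against the reference values:
max(abs(rowMeans(smat2) - ref))   # DelayedArray-based scaling
max(abs(rowMeans(smat3) - ref))   # ScaledMatrix-based scaling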
In most practical applications, though, this does not seem to be a major concern, especially as most values (e.g., in log-normalized expression matrices) lie close to zero anyway.
sessionInfo()
## R version 4.4.0 beta (2024-04-15 r86425)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 22.04.4 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.19-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] BiocSingular_1.20.0 ScaledMatrix_1.12.0 DelayedArray_0.30.0
## [4] SparseArray_1.4.0 S4Arrays_1.4.0 abind_1.4-5
## [7] IRanges_2.38.0 S4Vectors_0.42.0 MatrixGenerics_1.16.0
## [10] matrixStats_1.3.0 BiocGenerics_0.50.0 Matrix_1.7-0
## [13] BiocStyle_2.32.0
##
## loaded via a namespace (and not attached):
## [1] jsonlite_1.8.8 compiler_4.4.0
## [3] BiocManager_1.30.22 crayon_1.5.2
## [5] rsvd_1.0.5 Rcpp_1.0.12
## [7] DelayedMatrixStats_1.26.0 parallel_4.4.0
## [9] jquerylib_0.1.4 BiocParallel_1.38.0
## [11] yaml_2.3.8 fastmap_1.1.1
## [13] lattice_0.22-6 R6_2.5.1
## [15] XVector_0.44.0 knitr_1.46
## [17] bookdown_0.39 bslib_0.7.0
## [19] rlang_1.1.3 cachem_1.0.8
## [21] xfun_0.43 sass_0.4.9
## [23] cli_3.6.2 zlibbioc_1.50.0
## [25] digest_0.6.35 grid_4.4.0
## [27] irlba_2.3.5.1 sparseMatrixStats_1.16.0
## [29] lifecycle_1.0.4 evaluate_0.23
## [31] codetools_0.2-20 beachmat_2.20.0
## [33] rmarkdown_2.26 tools_4.4.0
## [35] htmltools_0.5.8.1