Data integration and batch correction¶
An important task of single-cell analysis is the integration of several samples, which we can perform with omicverse.
Here we demonstrate how to merge data with omicverse and correct for batch effects. omicverse provides several batch-effect correction methods, including harmony, scanorama and combat, which do not require a GPU, and SIMBA, which requires a GPU. If a GPU is available, we recommend the GPU-based scVI and scANVI to obtain the best batch-effect correction results.
import omicverse as ov
#print(f"omicverse version: {ov.__version__}")
import scanpy as sc
#print(f"scanpy version: {sc.__version__}")
ov.utils.ov_plot_set()
[omicverse ASCII banner]
Version: 1.5.3, Tutorials: https://omicverse.readthedocs.io/
Data integration¶
First, we need to concatenate the scRNA-seq data from the different batches, which we can do with sc.concat.
The dataset we will use to demonstrate data integration contains several samples of bone marrow mononuclear cells. These samples were originally created for the Open Problems in Single-Cell Analysis NeurIPS Competition 2021.
We selected samples s1d3, s2d1, and s3d7 to perform the integration. The individual data can be downloaded from figshare:
- s1d3:
- s2d1:
- s3d7:
adata1=ov.read('neurips2021_s1d3.h5ad')
adata1.obs['batch']='s1d3'
adata2=ov.read('neurips2021_s2d1.h5ad')
adata2.obs['batch']='s2d1'
adata3=ov.read('neurips2021_s3d7.h5ad')
adata3.obs['batch']='s3d7'
adata=sc.concat([adata1,adata2,adata3],merge='same')
adata
AnnData object with n_obs × n_vars = 27423 × 13953 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train' var: 'feature_types', 'gene_id' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap' layers: 'counts'
We can see that there are now three batches in the batch column:
adata.obs['batch'].unique()
array(['s1d3', 's2d1', 's3d7'], dtype=object)
import numpy as np
# Cast the expression matrix back to an integer dtype: downstream QC (Scrublet)
# and count-based models such as scVI expect raw integer counts.
adata.X = adata.X.astype(np.int64)
Data preprocessing and batch visualization¶
We first perform quality control and normalisation of the data, together with selection of highly variable genes, and then visualise the potential batch effects in the data.
Here, we can set batch_key='batch' so that doublet detection and highly variable gene identification are performed per batch.
adata=ov.pp.qc(adata,
tresh={'mito_perc': 0.2, 'nUMIs': 500, 'detected_genes': 250},
batch_key='batch')
adata
Calculate QC metrics
End calculation of QC metrics.
Original cell number: 27423
Begin of post doublets removal and QC plot
Running Scrublet
filtered out 116 genes that are detected in less than 3 cells
normalizing counts per cell
    finished (0:00:00)
extracting highly variable genes
    finished (0:00:00)
--> added
    'highly_variable', boolean vector (adata.var)
    'means', float vector (adata.var)
    'dispersions', float vector (adata.var)
    'dispersions_norm', float vector (adata.var)
normalizing counts per cell
    finished (0:00:00)
normalizing counts per cell
    finished (0:00:00)
Embedding transcriptomes using PCA...
Automatically set threshold at doublet score = 0.52
Detected doublet rate = 0.0%
Estimated detectable doublet fraction = 11.5%
Overall doublet rate:
    Expected   = 5.0%
    Estimated  = 0.3%
filtered out 26 genes that are detected in less than 3 cells
normalizing counts per cell
    finished (0:00:00)
extracting highly variable genes
    finished (0:00:00)
--> added
    'highly_variable', boolean vector (adata.var)
    'means', float vector (adata.var)
    'dispersions', float vector (adata.var)
    'dispersions_norm', float vector (adata.var)
normalizing counts per cell
    finished (0:00:00)
normalizing counts per cell
    finished (0:00:00)
Embedding transcriptomes using PCA...
Automatically set threshold at doublet score = 0.50
Detected doublet rate = 0.0%
Estimated detectable doublet fraction = 36.7%
Overall doublet rate:
    Expected   = 5.0%
    Estimated  = 0.0%
filtered out 11 genes that are detected in less than 3 cells
normalizing counts per cell
    finished (0:00:00)
extracting highly variable genes
    finished (0:00:00)
--> added
    'highly_variable', boolean vector (adata.var)
    'means', float vector (adata.var)
    'dispersions', float vector (adata.var)
    'dispersions_norm', float vector (adata.var)
normalizing counts per cell
    finished (0:00:00)
normalizing counts per cell
    finished (0:00:00)
Embedding transcriptomes using PCA...
Automatically set threshold at doublet score = 0.52
Detected doublet rate = 0.1%
Estimated detectable doublet fraction = 21.3%
Overall doublet rate:
    Expected   = 5.0%
    Estimated  = 0.3%
Scrublet finished (0:00:20)
Cells retained after scrublet: 27412, 11 removed.
End of post doublets removal and QC plots.
Filters application (seurat or mads)
Lower treshold, nUMIs: 500; filtered-out-cells: 0
Lower treshold, n genes: 250; filtered-out-cells: 420
Lower treshold, mito %: 0.2; filtered-out-cells: 285
Filters applicated.
Total cell filtered out with this last --mode seurat QC (and its chosen options): 705
Cells retained after scrublet and seurat filtering: 26707, 716 removed.
AnnData object with n_obs × n_vars = 26707 × 13953 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes' var: 'feature_types', 'gene_id', 'mt', 'n_cells' uns: 'scrublet' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap' layers: 'counts'
We can store the raw counts in case we need them after filtering to the highly variable genes (see the sketch after the HVG subsetting below).
adata=ov.pp.preprocess(adata,mode='shiftlog|pearson',
n_HVGs=3000,batch_key=None)
adata
Begin robust gene identification
After filtration, 13953/13953 genes are kept. Among 13953 genes, 13953 genes are robust.
End of robust gene identification.
Begin size normalization: shiftlog and HVGs selection pearson
normalizing counts per cell
    The following highly-expressed genes are not considered during normalization factor computation:
    ['IGKC', 'HBB', 'MALAT1', 'IGHA1', 'IGHM', 'HBA2', 'IGLC1', 'IGLC2', 'IGLC3']
    finished (0:00:00)
extracting highly variable genes
--> added
    'highly_variable', boolean vector (adata.var)
    'highly_variable_rank', float vector (adata.var)
    'highly_variable_nbatches', int vector (adata.var)
    'highly_variable_intersection', boolean vector (adata.var)
    'means', float vector (adata.var)
    'variances', float vector (adata.var)
    'residual_variances', float vector (adata.var)
End of size normalization: shiftlog and HVGs selection pearson
AnnData object with n_obs × n_vars = 26707 × 13953 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'scrublet', 'layers_counts', 'log1p', 'hvg' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap' layers: 'counts'
adata.raw = adata
adata = adata[:, adata.var.highly_variable_features]
adata
View of AnnData object with n_obs × n_vars = 26707 × 3000 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'scrublet', 'layers_counts', 'log1p', 'hvg' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap' layers: 'counts'
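If the full gene set is needed again after the HVG subsetting above, the object stored in adata.raw can be converted back to an AnnData object. A minimal sketch with standard AnnData operations; note that .raw keeps the normalised matrix at the time of assignment, while the raw counts of the HVG subset remain in the counts layer:
# Recover the full (pre-HVG-filter) object from .raw if it is needed later.
adata_full = adata.raw.to_adata()      # normalised expression, all 13,953 genes
counts_hvg = adata.layers['counts']    # raw counts, restricted to the 3,000 HVGs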
We can save the pre-processed data.
adata.write_h5ad('neurips2021_batch_normlog.h5ad',compression='gzip')
Similarly, we compute a PCA on the HVGs and visualise the potential batch effects in the data using pymde. pymde computes a minimum-distortion embedding and is used here as a GPU-accelerated alternative to UMAP.
ov.pp.scale(adata)
ov.pp.pca(adata,layer='scaled',n_pcs=50,mask_var='highly_variable_features')
adata.obsm["X_mde_pca"] = ov.utils.mde(adata.obsm["scaled|original|X_pca"])
... as `zero_center=True`, sparse input is densified and may lead to large memory consumption
There is a very clear batch effect in the data:
ov.utils.embedding(adata,
basis='X_mde_pca',frameon='small',
color=['batch','cell_type'],show=False)
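If pymde is not available, the same check can be made with a standard UMAP computed on the uncorrected PCA. A quick sketch with the scanpy API:
# Alternative to ov.utils.mde: a standard (slower) UMAP on the uncorrected PCA.
sc.pp.neighbors(adata, use_rep='scaled|original|X_pca')
sc.tl.umap(adata)
ov.utils.embedding(adata,
                   basis='X_umap',frameon='small',
                   color=['batch','cell_type'],show=False)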
Harmony¶
Harmony is an algorithm for integrating single-cell genomics datasets. Please check out the manuscript in Nature Methods.
The function ov.single.batch_correction can be run with methods set to harmony, combat, or scanorama (and, later in this tutorial, scVI). The three CPU-based calls share the same signature, so they could also be run in one loop, as in the sketch below.
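A minimal sketch using only the calls shown in this tutorial; each method writes its embedding to adata.obsm['X_<method>']:
# Run the three CPU-based corrections in one loop and build an MDE view of each.
for method in ['harmony','combat','scanorama']:
    ov.single.batch_correction(adata,batch_key='batch',
                               methods=method,n_pcs=50)
    adata.obsm[f'X_mde_{method}'] = ov.utils.mde(adata.obsm[f'X_{method}'])
Below, the calls are shown one method at a time.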
adata_harmony=ov.single.batch_correction(adata,batch_key='batch',
methods='harmony',n_pcs=50)
adata
...Begin using harmony to correct batch effect
... as `zero_center=True`, sparse input is densified and may lead to large memory consumption
2023-11-19 20:25:03,351 - harmonypy - INFO - Computing initial centroids with sklearn.KMeans...
INFO:harmonypy:Computing initial centroids with sklearn.KMeans...
2023-11-19 20:25:12,444 - harmonypy - INFO - sklearn.KMeans initialization complete.
INFO:harmonypy:sklearn.KMeans initialization complete.
2023-11-19 20:25:12,725 - harmonypy - INFO - Iteration 1 of 10
INFO:harmonypy:Iteration 1 of 10
2023-11-19 20:25:19,161 - harmonypy - INFO - Iteration 2 of 10
INFO:harmonypy:Iteration 2 of 10
2023-11-19 20:25:25,779 - harmonypy - INFO - Iteration 3 of 10
INFO:harmonypy:Iteration 3 of 10
2023-11-19 20:25:32,350 - harmonypy - INFO - Iteration 4 of 10
INFO:harmonypy:Iteration 4 of 10
2023-11-19 20:25:38,889 - harmonypy - INFO - Iteration 5 of 10
INFO:harmonypy:Iteration 5 of 10
2023-11-19 20:25:43,768 - harmonypy - INFO - Converged after 5 iterations
INFO:harmonypy:Converged after 5 iterations
AnnData object with n_obs × n_vars = 26707 × 3000 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes', 'topic_0', 'topic_1', 'topic_2', 'topic_3', 'topic_4', 'topic_5', 'topic_6', 'topic_7', 'topic_8', 'topic_9', 'topic_10', 'topic_11', 'topic_12', 'topic_13', 'topic_14', 'LDA_cluster' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'scrublet', 'layers_counts', 'log1p', 'hvg', 'scaled|original|pca_var_ratios', 'scaled|original|cum_sum_eigenvalues', 'batch_colors', 'cell_type_colors', 'topic_dendogram' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap', 'scaled|original|X_pca', 'X_mde_pca', 'X_topic_compositions', 'X_umap_features', 'X_mde_mira', 'X_mde_mira_topic', 'X_mde_mira_feature', 'X_harmony' varm: 'scaled|original|pca_loadings', 'topic_feature_compositions', 'topic_feature_activations' layers: 'counts', 'scaled', 'lognorm'
adata.obsm["X_mde_harmony"] = ov.utils.mde(adata.obsm["X_harmony"])
ov.utils.embedding(adata,
basis='X_mde_harmony',frameon='small',
color=['batch','cell_type'],show=False)
Combat¶
ComBat is a batch-effect correction method that is widely used for bulk RNA-seq, and it also works well on single-cell sequencing data.
adata_combat=ov.single.batch_correction(adata,batch_key='batch',
methods='combat',n_pcs=50)
adata
...Begin using combat to correct batch effect
Standardizing Data across genes.
Found 3 batches
Found 0 numerical variables:
Fitting L/S model and finding priors
Finding parametric adjustments
Adjusting data
AnnData object with n_obs × n_vars = 26707 × 3000 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes', 'topic_0', 'topic_1', 'topic_2', 'topic_3', 'topic_4', 'topic_5', 'topic_6', 'topic_7', 'topic_8', 'topic_9', 'topic_10', 'topic_11', 'topic_12', 'topic_13', 'topic_14', 'LDA_cluster' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'scrublet', 'layers_counts', 'log1p', 'hvg', 'scaled|original|pca_var_ratios', 'scaled|original|cum_sum_eigenvalues', 'batch_colors', 'cell_type_colors', 'topic_dendogram' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap', 'scaled|original|X_pca', 'X_mde_pca', 'X_topic_compositions', 'X_umap_features', 'X_mde_mira', 'X_mde_mira_topic', 'X_mde_mira_feature', 'X_harmony', 'X_mde_harmony', 'X_combat' varm: 'scaled|original|pca_loadings', 'topic_feature_compositions', 'topic_feature_activations' layers: 'counts', 'scaled', 'lognorm'
adata.obsm["X_mde_combat"] = ov.utils.mde(adata.obsm["X_combat"])
ov.utils.embedding(adata,
basis='X_mde_combat',frameon='small',
color=['batch','cell_type'],show=False)
Scanorama¶
Integration of single-cell RNA sequencing (scRNA-seq) data from multiple experiments, laboratories and technologies can uncover biological insights, but many integration methods are limited by a requirement for datasets to derive from functionally similar cells. Scanorama is an algorithm that identifies and merges the shared cell types among all pairs of datasets and accurately integrates heterogeneous collections of scRNA-seq data.
adata_scanorama=ov.single.batch_correction(adata,batch_key='batch',
methods='scanorama',n_pcs=50)
adata
...Begin using scanorama to correct batch effect
s1d3
s2d1
s3d7
Found 3000 genes among all datasets
[[0.         0.50093205 0.5758346 ]
 [0.         0.         0.60733037]
 [0.         0.         0.        ]]
Processing datasets (1, 2)
Processing datasets (0, 2)
Processing datasets (0, 1)
(26707, 50)
AnnData object with n_obs × n_vars = 26707 × 3000 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes', 'topic_0', 'topic_1', 'topic_2', 'topic_3', 'topic_4', 'topic_5', 'topic_6', 'topic_7', 'topic_8', 'topic_9', 'topic_10', 'topic_11', 'topic_12', 'topic_13', 'topic_14', 'LDA_cluster' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'scrublet', 'layers_counts', 'log1p', 'hvg', 'scaled|original|pca_var_ratios', 'scaled|original|cum_sum_eigenvalues', 'batch_colors', 'cell_type_colors', 'topic_dendogram' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap', 'scaled|original|X_pca', 'X_mde_pca', 'X_topic_compositions', 'X_umap_features', 'X_mde_mira', 'X_mde_mira_topic', 'X_mde_mira_feature', 'X_harmony', 'X_mde_harmony', 'X_combat', 'X_mde_combat', 'X_scanorama' varm: 'scaled|original|pca_loadings', 'topic_feature_compositions', 'topic_feature_activations' layers: 'counts', 'scaled', 'lognorm'
adata.obsm["X_mde_scanorama"] = ov.utils.mde(adata.obsm["X_scanorama"])
ov.utils.embedding(adata,
basis='X_mde_scanorama',frameon='small',
color=['batch','cell_type'],show=False)
scVI¶
An important task of single-cell analysis is the integration of several samples, which we can perform with scVI. For integration, scVI treats the data as unlabelled. When our dataset is fully labelled (perhaps from independent studies or independent analysis pipelines), we can obtain an integration that better preserves biology using scANVI, which incorporates cell-type annotation information. Here we demonstrate this functionality on the bone marrow samples integrated above; the same pipeline would generally be used to analyse any collection of scRNA-seq datasets.
adata_scvi=ov.single.batch_correction(adata,batch_key='batch',
methods='scVI',n_layers=2, n_latent=30, gene_likelihood="nb")
adata
...Begin using scVI to correct batch effect
Global seed set to 0
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 300/300: 100%|██████████| 300/300 [05:51<00:00, 1.18s/it, loss=1.51e+03, v_num=1]
`Trainer.fit` stopped: `max_epochs=300` reached.
Epoch 300/300: 100%|██████████| 300/300 [05:51<00:00, 1.17s/it, loss=1.51e+03, v_num=1]
AnnData object with n_obs × n_vars = 26707 × 3000 obs: 'GEX_n_genes_by_counts', 'GEX_pct_counts_mt', 'GEX_size_factors', 'GEX_phase', 'ADT_n_antibodies_by_counts', 'ADT_total_counts', 'ADT_iso_count', 'cell_type', 'batch', 'ADT_pseudotime_order', 'GEX_pseudotime_order', 'Samplename', 'Site', 'DonorNumber', 'Modality', 'VendorLot', 'DonorID', 'DonorAge', 'DonorBMI', 'DonorBloodType', 'DonorRace', 'Ethnicity', 'DonorGender', 'QCMeds', 'DonorSmoker', 'is_train', 'nUMIs', 'mito_perc', 'detected_genes', 'cell_complexity', 'n_genes', 'doublet_score', 'predicted_doublet', 'passing_mt', 'passing_nUMIs', 'passing_ngenes', 'topic_0', 'topic_1', 'topic_2', 'topic_3', 'topic_4', 'topic_5', 'topic_6', 'topic_7', 'topic_8', 'topic_9', 'topic_10', 'topic_11', 'topic_12', 'topic_13', 'topic_14', 'LDA_cluster', '_scvi_batch', '_scvi_labels' var: 'feature_types', 'gene_id', 'mt', 'n_cells', 'percent_cells', 'robust', 'mean', 'var', 'residual_variances', 'highly_variable_rank', 'highly_variable_features' uns: 'batch_colors', 'cell_type_colors', 'hvg', 'layers_counts', 'log1p', 'scaled|original|cum_sum_eigenvalues', 'scaled|original|pca_var_ratios', 'scrublet', 'topic_dendogram', '_scvi_uuid', '_scvi_manager_uuid' obsm: 'ADT_X_pca', 'ADT_X_umap', 'ADT_isotype_controls', 'GEX_X_pca', 'GEX_X_umap', 'X_combat', 'X_harmony', 'X_mde_combat', 'X_mde_harmony', 'X_mde_mira', 'X_mde_mira_feature', 'X_mde_mira_topic', 'X_mde_pca', 'X_mde_scanorama', 'X_scanorama', 'X_topic_compositions', 'X_umap_features', 'scaled|original|X_pca', 'X_scVI' varm: 'scaled|original|pca_loadings', 'topic_feature_activations', 'topic_feature_compositions' layers: 'counts', 'lognorm', 'scaled'
adata.obsm["X_mde_scVI"] = ov.utils.mde(adata.obsm["X_scVI"])
ov.utils.embedding(adata,
basis='X_mde_scVI',frameon='small',
color=['batch','cell_type'],show=False)
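scANVI itself is not run in this tutorial. As a hedged sketch using the scvi-tools API directly (assuming scvi-tools is installed and that the cell_type column holds the labels), a trained scVI model could be extended to scANVI as follows; the hyperparameters mirror the scVI call above.
import scvi

# Sketch with the scvi-tools API (not an omicverse call): train scVI on the raw
# counts, then initialise scANVI from it so the cell-type labels are used.
scvi.model.SCVI.setup_anndata(adata, layer='counts', batch_key='batch')
vae = scvi.model.SCVI(adata, n_layers=2, n_latent=30, gene_likelihood='nb')
vae.train()
adata.obsm['X_scVI'] = vae.get_latent_representation()

lvae = scvi.model.SCANVI.from_scvi_model(vae, adata=adata,
                                         labels_key='cell_type',
                                         unlabeled_category='Unknown')
lvae.train(max_epochs=20, n_samples_per_label=100)
adata.obsm['X_scANVI'] = lvae.get_latent_representation()
The resulting X_scANVI embedding could then be visualised and benchmarked in the same way as the other corrections.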
MIRA+CODAL¶
Topic modeling of batched single-cell data is challenging because these models cannot typically distinguish between biological and technical effects of the assay. CODAL (COvariate Disentangling Augmented Loss) uses a novel mutual information regularization technique to explicitly disentangle these two sources of variation.
LDA_obj=ov.utils.LDA_topic(adata,feature_type='expression',
highly_variable_key='highly_variable_features',
layers='counts',batch_key='batch',learning_rate=1e-3)
INFO:mira.adata_interface.topic_model:Predicting expression from genes from col: highly_variable_features
mira have been install version: 2.1.0
Gathering dataset statistics: 0%| | 0/26707 [00:00<?, ?it/s]
Learning rate range test: 0%| | 0/98 [00:00<?, ?it/s]
INFO:mira.topic_model.base:Set learning rates to: (0.0061957597093704065, 0.22248668375233174)
LDA_obj.predicted(15)
INFO:mira.adata_interface.topic_model:Predicting expression from genes from col: highly_variable_features
running LDA topic predicted
Gathering dataset statistics: 0%| | 0/26707 [00:00<?, ?it/s]
Epoch 0: 0%| | 0/24 [00:00<?, ?it/s]
INFO:mira.topic_model.base:Moving model to device: cpu
Predicting latent vars: 0%| | 0/105 [00:00<?, ?it/s]
INFO:mira.adata_interface.core:Added key to obsm: X_topic_compositions
INFO:mira.adata_interface.core:Added key to obsm: X_umap_features
INFO:mira.adata_interface.topic_model:Added cols: topic_0, topic_1, topic_2, topic_3, topic_4, topic_5, topic_6, topic_7, topic_8, topic_9, topic_10, topic_11, topic_12, topic_13, topic_14
INFO:mira.adata_interface.core:Added key to varm: topic_feature_compositions
INFO:mira.adata_interface.core:Added key to varm: topic_feature_activations
INFO:mira.adata_interface.topic_model:Added key to uns: topic_dendogram
finished: found 15 clusters and added 'LDA_cluster', the cluster labels (adata.obs, categorical)
adata.obsm["X_mde_mira_topic"] = ov.utils.mde(adata.obsm["X_topic_compositions"])
adata.obsm["X_mde_mira_feature"] = ov.utils.mde(adata.obsm["X_umap_features"])
ov.utils.embedding(adata,
basis='X_mde_mira_topic',frameon='small',
color=['batch','cell_type'],show=False)
ov.utils.embedding(adata,
basis='X_mde_mira_feature',frameon='small',
color=['batch','cell_type'],show=False)
Benchmarking test¶
The methods demonstrated here are selected based on results from benchmarking experiments, including the single-cell integration benchmarking project [Luecken et al., 2021]. This project also produced a software package called scib that can be used to run a range of integration methods as well as the metrics that were used for evaluation. In this section, we show how to use scib-metrics, an accelerated implementation of these metrics, to evaluate the quality of an integration.
adata.write_h5ad('neurips2021_batch_all.h5ad',compression='gzip')
adata=sc.read('neurips2021_batch_all.h5ad')
adata.obsm['X_pca']=adata.obsm['scaled|original|X_pca'].copy()
adata.obsm['X_mira_topic']=adata.obsm['X_topic_compositions'].copy()
adata.obsm['X_mira_feature']=adata.obsm['X_umap_features'].copy()
from scib_metrics.benchmark import Benchmarker
bm = Benchmarker(
adata,
batch_key="batch",
label_key="cell_type",
embedding_obsm_keys=["X_pca", "X_combat", "X_harmony",
'X_scanorama','X_mira_topic','X_mira_feature','X_scVI'],
n_jobs=8,
)
bm.benchmark()
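Once the benchmark has finished, the scores can be inspected as a table or plotted. A brief sketch with the scib-metrics Benchmarker API (keyword defaults may differ between versions):
# Summarise the benchmarking results across all embeddings.
bm.plot_results_table(min_max_scale=False)
results_df = bm.get_results(min_max_scale=False)
results_df.head()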
We find that, of the three CPU-based methods, Harmony removes the batch effect best, while scVI provides the best correction among the GPU-based methods.
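Once an embedding has been chosen, downstream analysis can continue on it directly. For example, a sketch with the standard scanpy API, clustering on the scVI latent space stored above:
# Build the neighbour graph and cluster on the chosen corrected embedding.
sc.pp.neighbors(adata, use_rep='X_scVI')
sc.tl.leiden(adata, key_added='leiden_scVI')
ov.utils.embedding(adata,
                   basis='X_mde_scVI',frameon='small',
                   color=['leiden_scVI','cell_type'],show=False)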